00:00:00.002 Started by upstream project "autotest-per-patch" build number 120610 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.094 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.154 Using shallow fetch with depth 1 00:00:00.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.154 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.707 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.718 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.730 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD) 00:00:04.730 > git config core.sparsecheckout # timeout=10 00:00:04.741 > git read-tree -mu HEAD # timeout=10 00:00:04.757 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5 00:00:04.772 Commit message: "packer: Insert post-processors only if at least one is defined" 00:00:04.772 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10 00:00:04.849 [Pipeline] Start of Pipeline 00:00:04.859 [Pipeline] library 00:00:04.861 Loading library shm_lib@master 00:00:04.861 Library shm_lib@master is cached. Copying from home. 00:00:04.875 [Pipeline] node 00:00:19.877 Still waiting to schedule task 00:00:19.877 Waiting for next available executor on ‘vagrant-vm-host’ 00:14:48.471 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu20-vg-autotest_3 00:14:48.473 [Pipeline] { 00:14:48.487 [Pipeline] catchError 00:14:48.489 [Pipeline] { 00:14:48.505 [Pipeline] wrap 00:14:48.516 [Pipeline] { 00:14:48.527 [Pipeline] stage 00:14:48.530 [Pipeline] { (Prologue) 00:14:48.553 [Pipeline] echo 00:14:48.555 Node: VM-host-SM4 00:14:48.560 [Pipeline] cleanWs 00:14:48.568 [WS-CLEANUP] Deleting project workspace... 00:14:48.568 [WS-CLEANUP] Deferred wipeout is used... 
00:14:48.574 [WS-CLEANUP] done 00:14:48.746 [Pipeline] setCustomBuildProperty 00:14:48.819 [Pipeline] nodesByLabel 00:14:48.820 Found a total of 1 nodes with the 'sorcerer' label 00:14:48.831 [Pipeline] httpRequest 00:14:48.834 HttpMethod: GET 00:14:48.835 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:14:48.835 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:14:48.837 Response Code: HTTP/1.1 200 OK 00:14:48.838 Success: Status code 200 is in the accepted range: 200,404 00:14:48.838 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_3/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:14:48.975 [Pipeline] sh 00:14:49.255 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:14:49.274 [Pipeline] httpRequest 00:14:49.278 HttpMethod: GET 00:14:49.279 URL: http://10.211.164.101/packages/spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz 00:14:49.279 Sending request to url: http://10.211.164.101/packages/spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz 00:14:49.280 Response Code: HTTP/1.1 200 OK 00:14:49.281 Success: Status code 200 is in the accepted range: 200,404 00:14:49.281 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz 00:14:51.450 [Pipeline] sh 00:14:51.730 + tar --no-same-owner -xf spdk_99b3305a57090397d476627a0fbcaca26b7cfada.tar.gz 00:14:54.274 [Pipeline] sh 00:14:54.554 + git -C spdk log --oneline -n5 00:14:54.554 99b3305a5 nvmf/auth: Diffie-Hellman exchange support 00:14:54.554 f808ef364 nvmf/auth: add nvmf_auth_qpair_cleanup() 00:14:54.554 60b78ebde nvme/auth: make DH functions public 00:14:54.554 33fdd170e nvme/auth: get dhgroup from EVP_PKEY in nvme_auth_derive_secret() 00:14:54.554 a0b47b88d nvme/auth: split generating dhkey from getting pubkey 00:14:54.576 [Pipeline] writeFile 00:14:54.608 [Pipeline] sh 00:14:54.889 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:14:54.901 [Pipeline] sh 00:14:55.207 + cat autorun-spdk.conf 00:14:55.207 SPDK_TEST_UNITTEST=1 00:14:55.207 SPDK_RUN_FUNCTIONAL_TEST=1 00:14:55.207 SPDK_TEST_NVME=1 00:14:55.207 SPDK_TEST_BLOCKDEV=1 00:14:55.207 SPDK_RUN_ASAN=1 00:14:55.207 SPDK_RUN_UBSAN=1 00:14:55.207 SPDK_TEST_RAID5=1 00:14:55.207 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:55.213 RUN_NIGHTLY=0 00:14:55.216 [Pipeline] } 00:14:55.234 [Pipeline] // stage 00:14:55.250 [Pipeline] stage 00:14:55.252 [Pipeline] { (Run VM) 00:14:55.268 [Pipeline] sh 00:14:55.550 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:14:55.550 + echo 'Start stage prepare_nvme.sh' 00:14:55.550 Start stage prepare_nvme.sh 00:14:55.550 + [[ -n 8 ]] 00:14:55.550 + disk_prefix=ex8 00:14:55.550 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest_3 ]] 00:14:55.550 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest_3/autorun-spdk.conf ]] 00:14:55.550 + source /var/jenkins/workspace/ubuntu20-vg-autotest_3/autorun-spdk.conf 00:14:55.550 ++ SPDK_TEST_UNITTEST=1 00:14:55.550 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:14:55.550 ++ SPDK_TEST_NVME=1 00:14:55.550 ++ SPDK_TEST_BLOCKDEV=1 00:14:55.550 ++ SPDK_RUN_ASAN=1 00:14:55.550 ++ SPDK_RUN_UBSAN=1 00:14:55.550 ++ SPDK_TEST_RAID5=1 00:14:55.550 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:55.550 ++ RUN_NIGHTLY=0 00:14:55.550 + cd /var/jenkins/workspace/ubuntu20-vg-autotest_3 00:14:55.550 + nvme_files=() 00:14:55.550 + declare -A nvme_files 00:14:55.550 + 
backend_dir=/var/lib/libvirt/images/backends 00:14:55.550 + nvme_files['nvme.img']=5G 00:14:55.550 + nvme_files['nvme-cmb.img']=5G 00:14:55.550 + nvme_files['nvme-multi0.img']=4G 00:14:55.550 + nvme_files['nvme-multi1.img']=4G 00:14:55.550 + nvme_files['nvme-multi2.img']=4G 00:14:55.550 + nvme_files['nvme-openstack.img']=8G 00:14:55.550 + nvme_files['nvme-zns.img']=5G 00:14:55.550 + (( SPDK_TEST_NVME_PMR == 1 )) 00:14:55.550 + (( SPDK_TEST_FTL == 1 )) 00:14:55.550 + (( SPDK_TEST_NVME_FDP == 1 )) 00:14:55.550 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:14:55.550 + for nvme in "${!nvme_files[@]}" 00:14:55.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:14:55.550 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:14:55.550 + for nvme in "${!nvme_files[@]}" 00:14:55.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:14:56.486 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:14:56.486 + for nvme in "${!nvme_files[@]}" 00:14:56.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:14:56.486 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:14:56.486 + for nvme in "${!nvme_files[@]}" 00:14:56.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:14:56.486 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:14:56.486 + for nvme in "${!nvme_files[@]}" 00:14:56.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:14:56.486 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:14:56.486 + for nvme in "${!nvme_files[@]}" 00:14:56.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:14:56.745 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:14:56.745 + for nvme in "${!nvme_files[@]}" 00:14:56.745 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:14:57.683 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:14:57.683 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:14:57.683 + echo 'End stage prepare_nvme.sh' 00:14:57.683 End stage prepare_nvme.sh 00:14:57.695 [Pipeline] sh 00:14:57.975 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:14:57.975 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -H -a -v -f ubuntu2004 00:14:57.975 00:14:57.975 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk/scripts/vagrant 00:14:57.975 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk 00:14:57.975 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest_3 00:14:57.975 HELP=0 00:14:57.975 DRY_RUN=0 00:14:57.975 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img, 00:14:57.975 NVME_DISKS_TYPE=nvme, 00:14:57.975 
NVME_AUTO_CREATE=0 00:14:57.975 NVME_DISKS_NAMESPACES=, 00:14:57.975 NVME_CMB=, 00:14:57.975 NVME_PMR=, 00:14:57.975 NVME_ZNS=, 00:14:57.975 NVME_MS=, 00:14:57.975 NVME_FDP=, 00:14:57.975 SPDK_VAGRANT_DISTRO=ubuntu2004 00:14:57.975 SPDK_VAGRANT_VMCPU=10 00:14:57.975 SPDK_VAGRANT_VMRAM=12288 00:14:57.975 SPDK_VAGRANT_PROVIDER=libvirt 00:14:57.975 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:14:57.975 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:14:57.975 SPDK_OPENSTACK_NETWORK=0 00:14:57.975 VAGRANT_PACKAGE_BOX=0 00:14:57.975 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:14:57.975 FORCE_DISTRO=true 00:14:57.975 VAGRANT_BOX_VERSION= 00:14:57.975 EXTRA_VAGRANTFILES= 00:14:57.975 NIC_MODEL=e1000 00:14:57.975 00:14:57.975 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt' 00:14:57.975 /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest_3 00:15:01.263 Bringing machine 'default' up with 'libvirt' provider... 00:15:01.828 ==> default: Creating image (snapshot of base box volume). 00:15:02.086 ==> default: Creating domain with the following settings... 00:15:02.086 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1713467297_92ce6301b6ce4b18f662 00:15:02.086 ==> default: -- Domain type: kvm 00:15:02.086 ==> default: -- Cpus: 10 00:15:02.086 ==> default: -- Feature: acpi 00:15:02.086 ==> default: -- Feature: apic 00:15:02.086 ==> default: -- Feature: pae 00:15:02.086 ==> default: -- Memory: 12288M 00:15:02.086 ==> default: -- Memory Backing: hugepages: 00:15:02.086 ==> default: -- Management MAC: 00:15:02.086 ==> default: -- Loader: 00:15:02.086 ==> default: -- Nvram: 00:15:02.086 ==> default: -- Base box: spdk/ubuntu2004 00:15:02.086 ==> default: -- Storage pool: default 00:15:02.086 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1713467297_92ce6301b6ce4b18f662.img (20G) 00:15:02.086 ==> default: -- Volume Cache: default 00:15:02.086 ==> default: -- Kernel: 00:15:02.086 ==> default: -- Initrd: 00:15:02.086 ==> default: -- Graphics Type: vnc 00:15:02.086 ==> default: -- Graphics Port: -1 00:15:02.086 ==> default: -- Graphics IP: 127.0.0.1 00:15:02.086 ==> default: -- Graphics Password: Not defined 00:15:02.086 ==> default: -- Video Type: cirrus 00:15:02.086 ==> default: -- Video VRAM: 9216 00:15:02.086 ==> default: -- Sound Type: 00:15:02.086 ==> default: -- Keymap: en-us 00:15:02.086 ==> default: -- TPM Path: 00:15:02.086 ==> default: -- INPUT: type=mouse, bus=ps2 00:15:02.086 ==> default: -- Command line args: 00:15:02.086 ==> default: -> value=-device, 00:15:02.086 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:15:02.086 ==> default: -> value=-drive, 00:15:02.086 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:15:02.087 ==> default: -> value=-device, 00:15:02.087 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:15:02.344 ==> default: Creating shared folders metadata... 00:15:02.344 ==> default: Starting domain. 00:15:04.249 ==> default: Waiting for domain to get an IP address... 00:15:14.313 ==> default: Waiting for SSH to become available... 00:15:15.685 ==> default: Configuring and enabling network interfaces... 
00:15:18.215 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:15:23.505 ==> default: Mounting SSHFS shared folder... 00:15:23.764 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:15:23.764 ==> default: Checking Mount.. 00:15:26.301 ==> default: Checking Mount.. 00:15:26.301 ==> default: Folder Successfully Mounted! 00:15:26.301 ==> default: Running provisioner: file... 00:15:26.608 default: ~/.gitconfig => .gitconfig 00:15:26.608 00:15:26.608 SUCCESS! 00:15:26.608 00:15:26.608 cd to /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:15:26.608 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:15:26.608 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt" to destroy all trace of vm. 00:15:26.608 00:15:26.618 [Pipeline] } 00:15:26.636 [Pipeline] // stage 00:15:26.644 [Pipeline] dir 00:15:26.644 Running in /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt 00:15:26.646 [Pipeline] { 00:15:26.658 [Pipeline] catchError 00:15:26.659 [Pipeline] { 00:15:26.670 [Pipeline] sh 00:15:26.950 + vagrant ssh-config --host vagrant 00:15:26.950 + sed -ne /^Host/,$p+ 00:15:26.950 tee ssh_conf 00:15:31.169 Host vagrant 00:15:31.169 HostName 192.168.121.188 00:15:31.169 User vagrant 00:15:31.169 Port 22 00:15:31.169 UserKnownHostsFile /dev/null 00:15:31.169 StrictHostKeyChecking no 00:15:31.169 PasswordAuthentication no 00:15:31.169 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:15:31.169 IdentitiesOnly yes 00:15:31.169 LogLevel FATAL 00:15:31.169 ForwardAgent yes 00:15:31.169 ForwardX11 yes 00:15:31.169 00:15:31.182 [Pipeline] withEnv 00:15:31.184 [Pipeline] { 00:15:31.199 [Pipeline] sh 00:15:31.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:15:31.478 source /etc/os-release 00:15:31.478 [[ -e /image.version ]] && img=$(< /image.version) 00:15:31.478 # Minimal, systemd-like check. 00:15:31.478 if [[ -e /.dockerenv ]]; then 00:15:31.478 # Clear garbage from the node's name: 00:15:31.478 # agt-er_autotest_547-896 -> autotest_547-896 00:15:31.478 # $HOSTNAME is the actual container id 00:15:31.478 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:15:31.478 if mountpoint -q /etc/hostname; then 00:15:31.478 # We can assume this is a mount from a host where container is running, 00:15:31.478 # so fetch its hostname to easily identify the target swarm worker. 
00:15:31.478 container="$(< /etc/hostname) ($agent)" 00:15:31.478 else 00:15:31.478 # Fallback 00:15:31.478 container=$agent 00:15:31.478 fi 00:15:31.478 fi 00:15:31.478 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:15:31.478 00:15:32.154 [Pipeline] } 00:15:32.174 [Pipeline] // withEnv 00:15:32.183 [Pipeline] setCustomBuildProperty 00:15:32.197 [Pipeline] stage 00:15:32.199 [Pipeline] { (Tests) 00:15:32.217 [Pipeline] sh 00:15:32.495 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:15:33.074 [Pipeline] timeout 00:15:33.074 Timeout set to expire in 1 hr 0 min 00:15:33.076 [Pipeline] { 00:15:33.092 [Pipeline] sh 00:15:33.371 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:15:34.306 HEAD is now at 99b3305a5 nvmf/auth: Diffie-Hellman exchange support 00:15:34.319 [Pipeline] sh 00:15:34.597 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:15:35.533 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:35.547 [Pipeline] sh 00:15:35.825 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:15:36.406 [Pipeline] sh 00:15:36.721 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:15:37.288 ++ readlink -f spdk_repo 00:15:37.288 + DIR_ROOT=/home/vagrant/spdk_repo 00:15:37.288 + [[ -n /home/vagrant/spdk_repo ]] 00:15:37.288 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:15:37.288 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:15:37.288 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:15:37.288 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:15:37.288 + [[ -d /home/vagrant/spdk_repo/output ]] 00:15:37.288 + cd /home/vagrant/spdk_repo 00:15:37.288 + source /etc/os-release 00:15:37.288 ++ NAME=Ubuntu 00:15:37.288 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:15:37.288 ++ ID=ubuntu 00:15:37.288 ++ ID_LIKE=debian 00:15:37.288 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:15:37.288 ++ VERSION_ID=20.04 00:15:37.288 ++ HOME_URL=https://www.ubuntu.com/ 00:15:37.288 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:15:37.288 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:15:37.288 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:15:37.288 ++ VERSION_CODENAME=focal 00:15:37.288 ++ UBUNTU_CODENAME=focal 00:15:37.288 + uname -a 00:15:37.288 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:15:37.288 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:37.288 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:37.547 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:15:37.547 Hugepages 00:15:37.547 node hugesize free / total 00:15:37.547 node0 1048576kB 0 / 0 00:15:37.547 node0 2048kB 0 / 0 00:15:37.547 00:15:37.547 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:37.547 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:37.547 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:15:37.547 + rm -f /tmp/spdk-ld-path 00:15:37.547 + source autorun-spdk.conf 00:15:37.547 ++ SPDK_TEST_UNITTEST=1 00:15:37.547 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:37.547 ++ SPDK_TEST_NVME=1 00:15:37.547 ++ SPDK_TEST_BLOCKDEV=1 00:15:37.547 ++ SPDK_RUN_ASAN=1 00:15:37.547 ++ SPDK_RUN_UBSAN=1 00:15:37.547 ++ 
SPDK_TEST_RAID5=1 00:15:37.547 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:15:37.547 ++ RUN_NIGHTLY=0 00:15:37.547 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:15:37.547 + [[ -n '' ]] 00:15:37.547 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:15:37.547 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:37.806 + for M in /var/spdk/build-*-manifest.txt 00:15:37.806 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:15:37.806 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:15:37.806 + for M in /var/spdk/build-*-manifest.txt 00:15:37.806 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:15:37.806 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:15:37.806 ++ uname 00:15:37.806 + [[ Linux == \L\i\n\u\x ]] 00:15:37.806 + sudo dmesg -T 00:15:37.806 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:37.806 + sudo dmesg --clear 00:15:37.806 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:37.806 + dmesg_pid=2344 00:15:37.806 + sudo dmesg -Tw 00:15:37.806 + [[ Ubuntu == FreeBSD ]] 00:15:37.806 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:37.806 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:37.806 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:15:37.806 + [[ -x /usr/src/fio-static/fio ]] 00:15:37.806 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:15:37.806 + [[ ! -v VFIO_QEMU_BIN ]] 00:15:37.806 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:15:37.806 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:15:37.806 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:37.806 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:37.806 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:15:37.806 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:15:37.806 Test configuration: 00:15:37.806 SPDK_TEST_UNITTEST=1 00:15:37.806 SPDK_RUN_FUNCTIONAL_TEST=1 00:15:37.806 SPDK_TEST_NVME=1 00:15:37.806 SPDK_TEST_BLOCKDEV=1 00:15:37.806 SPDK_RUN_ASAN=1 00:15:37.806 SPDK_RUN_UBSAN=1 00:15:37.806 SPDK_TEST_RAID5=1 00:15:37.806 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:15:37.806 RUN_NIGHTLY=0 19:08:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.806 19:08:52 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:15:37.806 19:08:52 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.806 19:08:52 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.806 19:08:52 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:15:37.806 19:08:52 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:15:37.806 19:08:52 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:15:37.806 19:08:52 -- paths/export.sh@5 -- $ export PATH 00:15:37.806 19:08:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:15:37.806 19:08:52 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:15:37.806 19:08:52 -- common/autobuild_common.sh@435 -- $ date +%s 00:15:37.806 19:08:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713467332.XXXXXX 00:15:37.806 19:08:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713467332.cddfDk 00:15:37.806 19:08:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:15:37.806 19:08:52 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:15:37.806 19:08:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:15:37.806 19:08:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:15:37.806 19:08:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:15:37.806 19:08:52 -- common/autobuild_common.sh@451 -- $ get_config_params 00:15:37.806 19:08:52 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:15:37.806 19:08:52 -- common/autotest_common.sh@10 -- $ set +x 00:15:37.806 19:08:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:15:37.806 19:08:52 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:15:37.806 19:08:52 -- pm/common@17 -- $ local monitor 00:15:37.806 19:08:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:37.806 19:08:52 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2380 00:15:37.806 19:08:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:37.806 19:08:52 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2381 00:15:37.806 19:08:52 -- pm/common@21 -- $ date +%s 00:15:37.806 19:08:52 -- pm/common@26 -- $ sleep 1 00:15:37.806 19:08:52 -- pm/common@21 -- $ date +%s 00:15:37.806 19:08:52 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713467332 00:15:37.806 19:08:52 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713467332 00:15:37.806 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:37.806 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:15:37.806 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713467332_collect-vmstat.pm.log 00:15:37.806 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713467332_collect-cpu-load.pm.log 00:15:39.182 19:08:53 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:15:39.182 19:08:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:15:39.182 19:08:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:15:39.182 19:08:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:15:39.182 19:08:53 -- spdk/autobuild.sh@16 -- $ date -u 00:15:39.182 Thu Apr 18 19:08:53 UTC 2024 00:15:39.182 19:08:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:15:39.182 v24.05-pre-442-g99b3305a5 00:15:39.182 19:08:53 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:15:39.182 19:08:53 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:15:39.182 19:08:53 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:15:39.182 19:08:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:15:39.182 19:08:53 -- common/autotest_common.sh@10 -- $ set +x 00:15:39.182 ************************************ 00:15:39.182 START TEST asan 00:15:39.182 ************************************ 00:15:39.182 using asan 00:15:39.182 19:08:53 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:15:39.182 00:15:39.182 real 0m0.000s 00:15:39.182 user 0m0.000s 00:15:39.182 sys 0m0.000s 00:15:39.182 19:08:53 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:15:39.182 ************************************ 00:15:39.182 END TEST asan 00:15:39.182 19:08:53 -- common/autotest_common.sh@10 -- $ set +x 00:15:39.182 ************************************ 00:15:39.182 19:08:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:15:39.182 19:08:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:15:39.182 19:08:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:15:39.182 19:08:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:15:39.182 19:08:54 -- common/autotest_common.sh@10 -- $ set +x 00:15:39.182 ************************************ 00:15:39.182 START TEST ubsan 00:15:39.182 ************************************ 00:15:39.182 using ubsan 00:15:39.182 19:08:54 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:15:39.182 00:15:39.182 real 0m0.000s 00:15:39.182 user 0m0.000s 00:15:39.182 sys 0m0.000s 00:15:39.182 19:08:54 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:15:39.182 19:08:54 -- common/autotest_common.sh@10 -- $ set +x 00:15:39.182 ************************************ 00:15:39.182 END TEST ubsan 00:15:39.182 ************************************ 00:15:39.182 19:08:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:15:39.183 19:08:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:15:39.183 19:08:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:15:39.183 19:08:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:15:39.183 19:08:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:15:39.183 19:08:54 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:15:39.183 19:08:54 -- spdk/autobuild.sh@58 -- $ unittest_build 00:15:39.183 19:08:54 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:15:39.183 19:08:54 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:15:39.183 19:08:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:15:39.183 19:08:54 -- common/autotest_common.sh@10 -- $ set +x 00:15:39.183 ************************************ 00:15:39.183 START TEST 
unittest_build 00:15:39.183 ************************************ 00:15:39.183 19:08:54 -- common/autotest_common.sh@1111 -- $ _unittest_build 00:15:39.183 19:08:54 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:15:39.183 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:39.183 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:39.751 Using 'verbs' RDMA provider 00:15:55.651 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:16:13.734 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:16:13.734 Creating mk/config.mk...done. 00:16:13.734 Creating mk/cc.flags.mk...done. 00:16:13.734 Type 'make' to build. 00:16:13.734 19:09:27 -- common/autobuild_common.sh@403 -- $ make -j10 00:16:13.734 make[1]: Nothing to be done for 'all'. 00:16:13.992 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.251 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.251 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.251 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.251 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.251 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.251 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.509 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.509 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.509 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.510 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.510 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.768 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.768 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.768 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:14.768 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:15.027 ./include//reg_sizes.asm:208: warning: Unknown section 
attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[the same ./include//reg_sizes.asm:208 and ./include//reg_sizes.asm:358 "Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property'" warnings repeat for every remaining assembled object, 00:16:15.027 through 00:16:24.382]
[-w+other] 00:16:24.382 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.382 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.383 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.383 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.383 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.383 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.641 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.641 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.641 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.898 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.898 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.898 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:24.898 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:25.465 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:25.465 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:25.723 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:25.723 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:25.723 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:26.288 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:26.288 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:26.546 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:26.546 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:26.805 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:26.805 ./include//reg_sizes.asm:358: warning: Unknown section 
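For context, this assembler warning means NASM encountered the 'note' section attribute, did not recognise it, and dropped it while still emitting the object file, so the build itself is unaffected. A minimal illustrative declaration of the kind that triggers it (a sketch assuming ELF64 output and a CET-style GNU property note, not the verbatim contents of reg_sizes.asm) is:

    ; hypothetical sketch: declare a GNU property note section using the 'note' attribute;
    ; NASM releases that predate this attribute print the warning above and ignore it
    section .note.gnu.property note alloc noexec align=8

Newer NASM releases that understand the attribute assemble the same directive silently.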
00:16:28.355 The Meson build system 00:16:28.355 Version: 1.4.0 00:16:28.355 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:16:28.355 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:16:28.355 Build type: native build 00:16:28.355 Program cat found: YES (/usr/bin/cat) 00:16:28.355 Project name: DPDK 00:16:28.355 Project version: 23.11.0 00:16:28.355 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:16:28.355 C linker for the host machine: cc ld.bfd 2.34 00:16:28.355 Host machine cpu family: x86_64 00:16:28.355 Host machine cpu: x86_64
00:16:28.355 Message: ## Building in Developer Mode ## 00:16:28.355 Program pkg-config found: YES (/usr/bin/pkg-config) 00:16:28.355 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:16:28.355 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:16:28.355 Program python3 found: YES (/usr/bin/python3) 00:16:28.355 Program cat found: YES (/usr/bin/cat) 00:16:28.355 Compiler for C supports arguments -march=native: YES 00:16:28.355 Checking for size of "void *" : 8 00:16:28.355 Checking for size of "void *" : 8 (cached) 00:16:28.355 Library m found: YES 00:16:28.355 Library numa found: YES 00:16:28.355 Has header "numaif.h" : YES 00:16:28.355 Library fdt found: NO 00:16:28.355 Library execinfo found: NO 00:16:28.355 Has header "execinfo.h" : YES 00:16:28.355 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:16:28.355 Run-time dependency libarchive found: NO (tried pkgconfig) 00:16:28.355 Run-time dependency libbsd found: NO (tried pkgconfig) 00:16:28.355 Run-time dependency jansson found: NO (tried pkgconfig) 00:16:28.355 Run-time dependency openssl found: YES 1.1.1f 00:16:28.355 Run-time dependency libpcap found: NO (tried pkgconfig) 00:16:28.355 Library pcap found: NO 00:16:28.355 Compiler for C supports arguments -Wcast-qual: YES 00:16:28.355 Compiler for C supports arguments -Wdeprecated: YES 00:16:28.355 Compiler for C supports arguments -Wformat: YES 00:16:28.355 Compiler for C supports arguments -Wformat-nonliteral: YES 00:16:28.355 Compiler for C supports arguments -Wformat-security: YES 00:16:28.355 Compiler for C supports arguments -Wmissing-declarations: YES 00:16:28.355 Compiler for C supports arguments -Wmissing-prototypes: YES 00:16:28.355 Compiler for C supports arguments -Wnested-externs: YES 00:16:28.355 Compiler for C supports arguments -Wold-style-definition: YES 00:16:28.355 Compiler for C supports arguments -Wpointer-arith: YES 00:16:28.355 Compiler for C supports arguments -Wsign-compare: YES 00:16:28.355 Compiler for C supports arguments -Wstrict-prototypes: YES 00:16:28.355 Compiler for C supports arguments -Wundef: YES 00:16:28.355 Compiler for C supports arguments -Wwrite-strings: YES 00:16:28.355 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:16:28.355 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:16:28.355 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:16:28.355 Program objdump found: YES (/usr/bin/objdump) 00:16:28.355 Compiler for C supports arguments -mavx512f: YES 00:16:28.355 Checking if "AVX512 checking" compiles: YES 00:16:28.355 Fetching value of define "__SSE4_2__" : 1 00:16:28.355 Fetching value of define "__AES__" : 1 00:16:28.355 Fetching value of define "__AVX__" : 1 00:16:28.355 Fetching value of define "__AVX2__" : 1 00:16:28.355 Fetching value of define "__AVX512BW__" : 1 00:16:28.355 Fetching value of define "__AVX512CD__" : 1 00:16:28.355 Fetching value of define "__AVX512DQ__" : 1 00:16:28.355 Fetching value of define "__AVX512F__" : 1 00:16:28.355 Fetching value of define "__AVX512VL__" : 1 00:16:28.355 Fetching value of define "__PCLMUL__" : 1 00:16:28.355 Fetching value of define "__RDRND__" : 1 00:16:28.355 Fetching value of define "__RDSEED__" : 1 00:16:28.355 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:16:28.355 Fetching value of define "__znver1__" : (undefined) 00:16:28.355 Fetching value of define "__znver2__" : (undefined) 
00:16:28.355 Fetching value of define "__znver3__" : (undefined) 00:16:28.355 Fetching value of define "__znver4__" : (undefined) 00:16:28.355 Library asan found: YES 00:16:28.355 Compiler for C supports arguments -Wno-format-truncation: YES 00:16:28.355 Message: lib/log: Defining dependency "log" 00:16:28.355 Message: lib/kvargs: Defining dependency "kvargs" 00:16:28.355 Message: lib/telemetry: Defining dependency "telemetry" 00:16:28.355 Library rt found: YES 00:16:28.355 Checking for function "getentropy" : NO 00:16:28.355 Message: lib/eal: Defining dependency "eal" 00:16:28.355 Message: lib/ring: Defining dependency "ring" 00:16:28.355 Message: lib/rcu: Defining dependency "rcu" 00:16:28.355 Message: lib/mempool: Defining dependency "mempool" 00:16:28.355 Message: lib/mbuf: Defining dependency "mbuf" 00:16:28.355 Fetching value of define "__PCLMUL__" : 1 (cached) 00:16:28.355 Fetching value of define "__AVX512F__" : 1 (cached) 00:16:28.355 Fetching value of define "__AVX512BW__" : 1 (cached) 00:16:28.355 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:16:28.355 Fetching value of define "__AVX512VL__" : 1 (cached) 00:16:28.355 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:16:28.355 Compiler for C supports arguments -mpclmul: YES 00:16:28.355 Compiler for C supports arguments -maes: YES 00:16:28.355 Compiler for C supports arguments -mavx512f: YES (cached) 00:16:28.355 Compiler for C supports arguments -mavx512bw: YES 00:16:28.355 Compiler for C supports arguments -mavx512dq: YES 00:16:28.355 Compiler for C supports arguments -mavx512vl: YES 00:16:28.355 Compiler for C supports arguments -mvpclmulqdq: YES 00:16:28.355 Compiler for C supports arguments -mavx2: YES 00:16:28.355 Compiler for C supports arguments -mavx: YES 00:16:28.355 Message: lib/net: Defining dependency "net" 00:16:28.355 Message: lib/meter: Defining dependency "meter" 00:16:28.355 Message: lib/ethdev: Defining dependency "ethdev" 00:16:28.355 Message: lib/pci: Defining dependency "pci" 00:16:28.355 Message: lib/cmdline: Defining dependency "cmdline" 00:16:28.355 Message: lib/hash: Defining dependency "hash" 00:16:28.355 Message: lib/timer: Defining dependency "timer" 00:16:28.355 Message: lib/compressdev: Defining dependency "compressdev" 00:16:28.355 Message: lib/cryptodev: Defining dependency "cryptodev" 00:16:28.355 Message: lib/dmadev: Defining dependency "dmadev" 00:16:28.355 Compiler for C supports arguments -Wno-cast-qual: YES 00:16:28.355 Message: lib/power: Defining dependency "power" 00:16:28.355 Message: lib/reorder: Defining dependency "reorder" 00:16:28.355 Message: lib/security: Defining dependency "security" 00:16:28.355 Has header "linux/userfaultfd.h" : YES 00:16:28.355 Has header "linux/vduse.h" : NO 00:16:28.355 Message: lib/vhost: Defining dependency "vhost" 00:16:28.355 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:16:28.355 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:16:28.355 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:16:28.355 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:16:28.355 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:16:28.355 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:16:28.355 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:16:28.355 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:16:28.355 Message: Disabling baseband/* drivers: missing internal dependency 
"bbdev" 00:16:28.355 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:16:28.355 Program doxygen found: YES (/usr/bin/doxygen) 00:16:28.355 Configuring doxy-api-html.conf using configuration 00:16:28.355 Configuring doxy-api-man.conf using configuration 00:16:28.355 Program mandb found: YES (/usr/bin/mandb) 00:16:28.355 Program sphinx-build found: NO 00:16:28.355 Configuring rte_build_config.h using configuration 00:16:28.355 Message: 00:16:28.355 ================= 00:16:28.355 Applications Enabled 00:16:28.355 ================= 00:16:28.355 00:16:28.355 apps: 00:16:28.355 00:16:28.355 00:16:28.355 Message: 00:16:28.355 ================= 00:16:28.355 Libraries Enabled 00:16:28.355 ================= 00:16:28.355 00:16:28.356 libs: 00:16:28.356 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:16:28.356 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:16:28.356 cryptodev, dmadev, power, reorder, security, vhost, 00:16:28.356 00:16:28.356 Message: 00:16:28.356 =============== 00:16:28.356 Drivers Enabled 00:16:28.356 =============== 00:16:28.356 00:16:28.356 common: 00:16:28.356 00:16:28.356 bus: 00:16:28.356 pci, vdev, 00:16:28.356 mempool: 00:16:28.356 ring, 00:16:28.356 dma: 00:16:28.356 00:16:28.356 net: 00:16:28.356 00:16:28.356 crypto: 00:16:28.356 00:16:28.356 compress: 00:16:28.356 00:16:28.356 vdpa: 00:16:28.356 00:16:28.356 00:16:28.356 Message: 00:16:28.356 ================= 00:16:28.356 Content Skipped 00:16:28.356 ================= 00:16:28.356 00:16:28.356 apps: 00:16:28.356 dumpcap: explicitly disabled via build config 00:16:28.356 graph: explicitly disabled via build config 00:16:28.356 pdump: explicitly disabled via build config 00:16:28.356 proc-info: explicitly disabled via build config 00:16:28.356 test-acl: explicitly disabled via build config 00:16:28.356 test-bbdev: explicitly disabled via build config 00:16:28.356 test-cmdline: explicitly disabled via build config 00:16:28.356 test-compress-perf: explicitly disabled via build config 00:16:28.356 test-crypto-perf: explicitly disabled via build config 00:16:28.356 test-dma-perf: explicitly disabled via build config 00:16:28.356 test-eventdev: explicitly disabled via build config 00:16:28.356 test-fib: explicitly disabled via build config 00:16:28.356 test-flow-perf: explicitly disabled via build config 00:16:28.356 test-gpudev: explicitly disabled via build config 00:16:28.356 test-mldev: explicitly disabled via build config 00:16:28.356 test-pipeline: explicitly disabled via build config 00:16:28.356 test-pmd: explicitly disabled via build config 00:16:28.356 test-regex: explicitly disabled via build config 00:16:28.356 test-sad: explicitly disabled via build config 00:16:28.356 test-security-perf: explicitly disabled via build config 00:16:28.356 00:16:28.356 libs: 00:16:28.356 metrics: explicitly disabled via build config 00:16:28.356 acl: explicitly disabled via build config 00:16:28.356 bbdev: explicitly disabled via build config 00:16:28.356 bitratestats: explicitly disabled via build config 00:16:28.356 bpf: explicitly disabled via build config 00:16:28.356 cfgfile: explicitly disabled via build config 00:16:28.356 distributor: explicitly disabled via build config 00:16:28.356 efd: explicitly disabled via build config 00:16:28.356 eventdev: explicitly disabled via build config 00:16:28.356 dispatcher: explicitly disabled via build config 00:16:28.356 gpudev: explicitly disabled via build config 00:16:28.356 gro: explicitly disabled via build config 00:16:28.356 gso: 
explicitly disabled via build config 00:16:28.356 ip_frag: explicitly disabled via build config 00:16:28.356 jobstats: explicitly disabled via build config 00:16:28.356 latencystats: explicitly disabled via build config 00:16:28.356 lpm: explicitly disabled via build config 00:16:28.356 member: explicitly disabled via build config 00:16:28.356 pcapng: explicitly disabled via build config 00:16:28.356 rawdev: explicitly disabled via build config 00:16:28.356 regexdev: explicitly disabled via build config 00:16:28.356 mldev: explicitly disabled via build config 00:16:28.356 rib: explicitly disabled via build config 00:16:28.356 sched: explicitly disabled via build config 00:16:28.356 stack: explicitly disabled via build config 00:16:28.356 ipsec: explicitly disabled via build config 00:16:28.356 pdcp: explicitly disabled via build config 00:16:28.356 fib: explicitly disabled via build config 00:16:28.356 port: explicitly disabled via build config 00:16:28.356 pdump: explicitly disabled via build config 00:16:28.356 table: explicitly disabled via build config 00:16:28.356 pipeline: explicitly disabled via build config 00:16:28.356 graph: explicitly disabled via build config 00:16:28.356 node: explicitly disabled via build config 00:16:28.356 00:16:28.356 drivers: 00:16:28.356 common/cpt: not in enabled drivers build config 00:16:28.356 common/dpaax: not in enabled drivers build config 00:16:28.356 common/iavf: not in enabled drivers build config 00:16:28.356 common/idpf: not in enabled drivers build config 00:16:28.356 common/mvep: not in enabled drivers build config 00:16:28.356 common/octeontx: not in enabled drivers build config 00:16:28.356 bus/auxiliary: not in enabled drivers build config 00:16:28.356 bus/cdx: not in enabled drivers build config 00:16:28.356 bus/dpaa: not in enabled drivers build config 00:16:28.356 bus/fslmc: not in enabled drivers build config 00:16:28.356 bus/ifpga: not in enabled drivers build config 00:16:28.356 bus/platform: not in enabled drivers build config 00:16:28.356 bus/vmbus: not in enabled drivers build config 00:16:28.356 common/cnxk: not in enabled drivers build config 00:16:28.356 common/mlx5: not in enabled drivers build config 00:16:28.356 common/nfp: not in enabled drivers build config 00:16:28.356 common/qat: not in enabled drivers build config 00:16:28.356 common/sfc_efx: not in enabled drivers build config 00:16:28.356 mempool/bucket: not in enabled drivers build config 00:16:28.356 mempool/cnxk: not in enabled drivers build config 00:16:28.356 mempool/dpaa: not in enabled drivers build config 00:16:28.356 mempool/dpaa2: not in enabled drivers build config 00:16:28.356 mempool/octeontx: not in enabled drivers build config 00:16:28.356 mempool/stack: not in enabled drivers build config 00:16:28.356 dma/cnxk: not in enabled drivers build config 00:16:28.356 dma/dpaa: not in enabled drivers build config 00:16:28.356 dma/dpaa2: not in enabled drivers build config 00:16:28.356 dma/hisilicon: not in enabled drivers build config 00:16:28.356 dma/idxd: not in enabled drivers build config 00:16:28.356 dma/ioat: not in enabled drivers build config 00:16:28.356 dma/skeleton: not in enabled drivers build config 00:16:28.356 net/af_packet: not in enabled drivers build config 00:16:28.356 net/af_xdp: not in enabled drivers build config 00:16:28.356 net/ark: not in enabled drivers build config 00:16:28.356 net/atlantic: not in enabled drivers build config 00:16:28.356 net/avp: not in enabled drivers build config 00:16:28.356 net/axgbe: not in enabled drivers 
build config 00:16:28.356 net/bnx2x: not in enabled drivers build config 00:16:28.356 net/bnxt: not in enabled drivers build config 00:16:28.356 net/bonding: not in enabled drivers build config 00:16:28.356 net/cnxk: not in enabled drivers build config 00:16:28.356 net/cpfl: not in enabled drivers build config 00:16:28.356 net/cxgbe: not in enabled drivers build config 00:16:28.356 net/dpaa: not in enabled drivers build config 00:16:28.356 net/dpaa2: not in enabled drivers build config 00:16:28.356 net/e1000: not in enabled drivers build config 00:16:28.356 net/ena: not in enabled drivers build config 00:16:28.356 net/enetc: not in enabled drivers build config 00:16:28.356 net/enetfec: not in enabled drivers build config 00:16:28.356 net/enic: not in enabled drivers build config 00:16:28.356 net/failsafe: not in enabled drivers build config 00:16:28.356 net/fm10k: not in enabled drivers build config 00:16:28.356 net/gve: not in enabled drivers build config 00:16:28.356 net/hinic: not in enabled drivers build config 00:16:28.356 net/hns3: not in enabled drivers build config 00:16:28.356 net/i40e: not in enabled drivers build config 00:16:28.356 net/iavf: not in enabled drivers build config 00:16:28.356 net/ice: not in enabled drivers build config 00:16:28.356 net/idpf: not in enabled drivers build config 00:16:28.356 net/igc: not in enabled drivers build config 00:16:28.356 net/ionic: not in enabled drivers build config 00:16:28.356 net/ipn3ke: not in enabled drivers build config 00:16:28.356 net/ixgbe: not in enabled drivers build config 00:16:28.356 net/mana: not in enabled drivers build config 00:16:28.356 net/memif: not in enabled drivers build config 00:16:28.356 net/mlx4: not in enabled drivers build config 00:16:28.356 net/mlx5: not in enabled drivers build config 00:16:28.356 net/mvneta: not in enabled drivers build config 00:16:28.356 net/mvpp2: not in enabled drivers build config 00:16:28.356 net/netvsc: not in enabled drivers build config 00:16:28.356 net/nfb: not in enabled drivers build config 00:16:28.356 net/nfp: not in enabled drivers build config 00:16:28.356 net/ngbe: not in enabled drivers build config 00:16:28.356 net/null: not in enabled drivers build config 00:16:28.356 net/octeontx: not in enabled drivers build config 00:16:28.356 net/octeon_ep: not in enabled drivers build config 00:16:28.356 net/pcap: not in enabled drivers build config 00:16:28.356 net/pfe: not in enabled drivers build config 00:16:28.356 net/qede: not in enabled drivers build config 00:16:28.356 net/ring: not in enabled drivers build config 00:16:28.356 net/sfc: not in enabled drivers build config 00:16:28.356 net/softnic: not in enabled drivers build config 00:16:28.356 net/tap: not in enabled drivers build config 00:16:28.356 net/thunderx: not in enabled drivers build config 00:16:28.356 net/txgbe: not in enabled drivers build config 00:16:28.356 net/vdev_netvsc: not in enabled drivers build config 00:16:28.356 net/vhost: not in enabled drivers build config 00:16:28.356 net/virtio: not in enabled drivers build config 00:16:28.356 net/vmxnet3: not in enabled drivers build config 00:16:28.356 raw/*: missing internal dependency, "rawdev" 00:16:28.356 crypto/armv8: not in enabled drivers build config 00:16:28.356 crypto/bcmfs: not in enabled drivers build config 00:16:28.356 crypto/caam_jr: not in enabled drivers build config 00:16:28.356 crypto/ccp: not in enabled drivers build config 00:16:28.356 crypto/cnxk: not in enabled drivers build config 00:16:28.356 crypto/dpaa_sec: not in enabled drivers 
build config 00:16:28.356 crypto/dpaa2_sec: not in enabled drivers build config 00:16:28.356 crypto/ipsec_mb: not in enabled drivers build config 00:16:28.356 crypto/mlx5: not in enabled drivers build config 00:16:28.356 crypto/mvsam: not in enabled drivers build config 00:16:28.356 crypto/nitrox: not in enabled drivers build config 00:16:28.356 crypto/null: not in enabled drivers build config 00:16:28.356 crypto/octeontx: not in enabled drivers build config 00:16:28.356 crypto/openssl: not in enabled drivers build config 00:16:28.356 crypto/scheduler: not in enabled drivers build config 00:16:28.356 crypto/uadk: not in enabled drivers build config 00:16:28.356 crypto/virtio: not in enabled drivers build config 00:16:28.356 compress/isal: not in enabled drivers build config 00:16:28.357 compress/mlx5: not in enabled drivers build config 00:16:28.357 compress/octeontx: not in enabled drivers build config 00:16:28.357 compress/zlib: not in enabled drivers build config 00:16:28.357 regex/*: missing internal dependency, "regexdev" 00:16:28.357 ml/*: missing internal dependency, "mldev" 00:16:28.357 vdpa/ifc: not in enabled drivers build config 00:16:28.357 vdpa/mlx5: not in enabled drivers build config 00:16:28.357 vdpa/nfp: not in enabled drivers build config 00:16:28.357 vdpa/sfc: not in enabled drivers build config 00:16:28.357 event/*: missing internal dependency, "eventdev" 00:16:28.357 baseband/*: missing internal dependency, "bbdev" 00:16:28.357 gpu/*: missing internal dependency, "gpudev" 00:16:28.357 00:16:28.357 00:16:28.357 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:28.615 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:28.615 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:28.615 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:28.873 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:28.873 Build targets in project: 85 00:16:28.873 00:16:28.873 DPDK 23.11.0 00:16:28.873 00:16:28.873 User defined options 00:16:28.873 buildtype : debug 00:16:28.873 default_library : static 00:16:28.873 libdir : lib 00:16:28.873 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:28.873 b_sanitize : address 00:16:28.873 c_args : -fPIC -Werror 00:16:28.873 c_link_args : 00:16:28.873 cpu_instruction_set: native 00:16:28.873 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:16:28.873 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:16:28.873 enable_docs : false 00:16:28.873 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:16:28.873 enable_kmods : false 00:16:28.873 tests : false 00:16:28.873 00:16:28.873 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:28.873 ./include//reg_sizes.asm:358: warning: 
Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:28.873 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.133 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:16:29.391 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:16:29.391 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:16:29.391 [3/264] Linking static target lib/librte_kvargs.a 00:16:29.391 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:16:29.391 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:16:29.391 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:16:29.391 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:16:29.391 [8/264] Linking static target lib/librte_log.a 00:16:29.391 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:16:29.650 [10/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:16:29.650 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:16:29.650 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:16:29.650 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:16:29.650 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:16:29.650 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:16:29.650 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.650 [16/264] Linking static target lib/librte_telemetry.a 00:16:29.650 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:16:29.650 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.909 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:16:29.909 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:16:29.909 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:16:29.909 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:16:29.909 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:16:29.909 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:29.909 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:16:29.909 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:16:29.909 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.167 [25/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:16:30.167 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:16:30.167 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:16:30.167 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:16:30.167 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.167 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:16:30.167 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.167 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:16:30.167 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:16:30.167 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:16:30.426 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:16:30.426 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.426 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:16:30.426 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:16:30.426 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.426 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:16:30.426 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:16:30.426 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:16:30.426 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:16:30.426 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:16:30.426 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:16:30.426 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.426 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:16:30.684 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:16:30.684 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:16:30.684 [44/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:16:30.684 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:16:30.684 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:16:30.684 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:16:30.684 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:16:30.684 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:16:30.943 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:16:30.943 
[51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:16:30.943 [52/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:16:30.943 [53/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:16:30.943 [54/264] Linking target lib/librte_log.so.24.0 00:16:30.943 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:16:30.943 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:16:30.943 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:16:30.943 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:16:30.943 [59/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:16:30.943 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:16:30.943 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:16:30.943 [62/264] Linking target lib/librte_kvargs.so.24.0 00:16:30.943 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:16:30.943 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:16:30.943 [65/264] Linking target lib/librte_telemetry.so.24.0 00:16:31.202 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:16:31.202 [67/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:16:31.202 [68/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:16:31.202 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:16:31.202 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:16:31.202 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:16:31.202 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:16:31.202 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:16:31.202 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:16:31.202 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:16:31.202 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:16:31.202 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:16:31.461 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:16:31.461 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:16:31.461 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:16:31.461 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:16:31.461 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:16:31.461 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:16:31.461 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:16:31.461 [85/264] Linking static target lib/librte_ring.a 00:16:31.461 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:16:31.719 [87/264] Linking static target lib/librte_eal.a 00:16:31.719 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:16:31.719 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:16:31.719 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:16:31.719 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:16:31.719 [92/264] Compiling 
C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:16:31.719 [93/264] Linking static target lib/librte_mempool.a 00:16:31.719 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:16:31.719 [95/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:16:31.719 [96/264] Linking static target lib/librte_rcu.a 00:16:31.977 [97/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:16:31.977 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:16:31.977 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:16:31.977 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:16:31.977 [101/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:16:31.977 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:16:32.235 [103/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:16:32.235 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:16:32.235 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:16:32.235 [106/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:16:32.235 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:16:32.235 [108/264] Linking static target lib/librte_net.a 00:16:32.235 [109/264] Linking static target lib/librte_meter.a 00:16:32.235 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:16:32.493 [111/264] Linking static target lib/librte_mbuf.a 00:16:32.493 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:16:32.493 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:16:32.493 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:16:32.494 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:16:32.494 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:16:32.494 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:16:32.494 [118/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:16:32.795 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:16:32.795 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:16:32.795 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:16:33.077 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:16:33.077 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:16:33.077 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:16:33.077 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:16:33.077 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:16:33.077 [127/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:16:33.077 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:16:33.077 [129/264] Linking static target lib/librte_pci.a 00:16:33.077 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:16:33.335 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:16:33.335 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:16:33.335 [133/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:16:33.335 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:16:33.335 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:16:33.335 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:16:33.335 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:16:33.335 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:16:33.335 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:16:33.335 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:16:33.335 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:16:33.335 [142/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:33.335 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:16:33.335 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:16:33.594 [145/264] Linking static target lib/librte_cmdline.a 00:16:33.594 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:16:33.594 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:16:33.852 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:16:33.852 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:16:33.852 [150/264] Linking static target lib/librte_timer.a 00:16:33.852 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:16:33.852 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:16:34.120 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:16:34.120 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:16:34.120 [155/264] Linking static target lib/librte_compressdev.a 00:16:34.120 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:16:34.121 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:16:34.393 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:16:34.393 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:16:34.393 [160/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.393 [161/264] Linking static target lib/librte_dmadev.a 00:16:34.393 [162/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:16:34.393 [163/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:16:34.393 [164/264] Linking static target lib/librte_hash.a 00:16:34.393 [165/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:16:34.651 [166/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:16:34.651 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:16:34.651 [168/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:16:34.651 [169/264] Linking static target lib/librte_ethdev.a 00:16:34.651 [170/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:16:34.651 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.651 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:16:34.909 [173/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.910 [174/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:16:34.910 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:16:34.910 [176/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:16:34.910 [177/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:16:35.168 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:16:35.168 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:16:35.168 [180/264] Linking static target lib/librte_power.a 00:16:35.168 [181/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:16:35.168 [182/264] Linking static target lib/librte_cryptodev.a 00:16:35.168 [183/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.425 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:16:35.425 [185/264] Linking static target lib/librte_reorder.a 00:16:35.425 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:16:35.425 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:16:35.425 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:16:35.683 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:16:35.683 [190/264] Linking static target lib/librte_security.a 00:16:35.683 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.941 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:16:35.941 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.941 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:16:36.199 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:16:36.199 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:16:36.199 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:16:36.199 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:16:36.457 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:16:36.457 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:16:36.457 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:16:36.457 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:16:36.457 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:16:36.457 [204/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:16:36.457 [205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:16:36.457 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:16:36.715 [207/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:16:36.715 [208/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:36.715 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:36.715 [210/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:16:36.715 [211/264] Linking static target drivers/librte_bus_vdev.a 00:16:36.715 [212/264] Compiling C 
object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:36.715 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:36.715 [214/264] Linking static target drivers/librte_bus_pci.a 00:16:36.973 [215/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:36.973 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:16:36.973 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:16:36.973 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:36.973 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:16:36.973 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:36.973 [221/264] Linking static target drivers/librte_mempool_ring.a 00:16:36.973 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:37.231 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:39.131 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:16:41.033 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:16:41.033 [226/264] Linking target lib/librte_eal.so.24.0 00:16:41.033 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:16:41.033 [228/264] Linking target drivers/librte_bus_vdev.so.24.0 00:16:41.033 [229/264] Linking target lib/librte_pci.so.24.0 00:16:41.033 [230/264] Linking target lib/librte_meter.so.24.0 00:16:41.033 [231/264] Linking target lib/librte_dmadev.so.24.0 00:16:41.304 [232/264] Linking target lib/librte_ring.so.24.0 00:16:41.304 [233/264] Linking target lib/librte_timer.so.24.0 00:16:41.304 [234/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:16:41.304 [235/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:16:41.304 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:16:41.304 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:16:41.304 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:16:41.304 [239/264] Linking target lib/librte_mempool.so.24.0 00:16:41.304 [240/264] Linking target lib/librte_rcu.so.24.0 00:16:41.304 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:16:41.584 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:16:41.584 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:16:41.584 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:16:41.584 [245/264] Linking target lib/librte_mbuf.so.24.0 00:16:41.584 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:16:41.584 [247/264] Linking target lib/librte_reorder.so.24.0 00:16:41.584 [248/264] Linking target lib/librte_compressdev.so.24.0 00:16:41.584 [249/264] Linking target lib/librte_net.so.24.0 00:16:41.843 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:16:41.843 [251/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:41.843 [252/264] Generating symbol file 
lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:16:41.843 [253/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:16:41.843 [254/264] Linking target lib/librte_hash.so.24.0 00:16:41.843 [255/264] Linking target lib/librte_cmdline.so.24.0 00:16:41.843 [256/264] Linking target lib/librte_security.so.24.0 00:16:41.843 [257/264] Linking target lib/librte_ethdev.so.24.0 00:16:42.101 [258/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:16:42.101 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:16:42.102 [260/264] Linking target lib/librte_power.so.24.0 00:16:42.668 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:16:42.668 [262/264] Linking static target lib/librte_vhost.a 00:16:44.573 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:16:44.832 [264/264] Linking target lib/librte_vhost.so.24.0 00:16:44.832 INFO: autodetecting backend as ninja 00:16:44.832 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:16:45.766 CC lib/log/log_flags.o 00:16:45.766 CC lib/log/log.o 00:16:45.766 CC lib/log/log_deprecated.o 00:16:45.766 CC lib/ut_mock/mock.o 00:16:46.023 CC lib/ut/ut.o 00:16:46.023 LIB libspdk_ut_mock.a 00:16:46.023 LIB libspdk_ut.a 00:16:46.023 LIB libspdk_log.a 00:16:46.281 CC lib/ioat/ioat.o 00:16:46.281 CXX lib/trace_parser/trace.o 00:16:46.281 CC lib/util/base64.o 00:16:46.281 CC lib/util/bit_array.o 00:16:46.281 CC lib/util/crc16.o 00:16:46.281 CC lib/util/cpuset.o 00:16:46.281 CC lib/util/crc32.o 00:16:46.281 CC lib/util/crc32c.o 00:16:46.281 CC lib/dma/dma.o 00:16:46.281 CC lib/vfio_user/host/vfio_user_pci.o 00:16:46.598 CC lib/util/crc32_ieee.o 00:16:46.598 CC lib/util/crc64.o 00:16:46.598 LIB libspdk_dma.a 00:16:46.598 CC lib/vfio_user/host/vfio_user.o 00:16:46.598 CC lib/util/dif.o 00:16:46.598 CC lib/util/fd.o 00:16:46.598 CC lib/util/file.o 00:16:46.598 CC lib/util/hexlify.o 00:16:46.598 CC lib/util/iov.o 00:16:46.598 CC lib/util/math.o 00:16:46.858 LIB libspdk_ioat.a 00:16:46.858 CC lib/util/pipe.o 00:16:46.858 CC lib/util/strerror_tls.o 00:16:46.858 LIB libspdk_vfio_user.a 00:16:46.858 CC lib/util/string.o 00:16:46.858 CC lib/util/uuid.o 00:16:46.858 CC lib/util/fd_group.o 00:16:46.858 CC lib/util/xor.o 00:16:46.858 CC lib/util/zipf.o 00:16:47.428 LIB libspdk_util.a 00:16:47.428 CC lib/idxd/idxd.o 00:16:47.428 CC lib/idxd/idxd_user.o 00:16:47.428 CC lib/conf/conf.o 00:16:47.428 CC lib/json/json_parse.o 00:16:47.428 CC lib/json/json_util.o 00:16:47.428 CC lib/env_dpdk/env.o 00:16:47.428 CC lib/vmd/vmd.o 00:16:47.428 CC lib/json/json_write.o 00:16:47.428 CC lib/rdma/common.o 00:16:47.428 LIB libspdk_trace_parser.a 00:16:47.688 CC lib/rdma/rdma_verbs.o 00:16:47.688 LIB libspdk_conf.a 00:16:47.688 CC lib/env_dpdk/memory.o 00:16:47.688 CC lib/env_dpdk/pci.o 00:16:47.688 CC lib/env_dpdk/init.o 00:16:47.688 CC lib/env_dpdk/threads.o 00:16:47.947 CC lib/env_dpdk/pci_ioat.o 00:16:47.947 LIB libspdk_json.a 00:16:47.947 LIB libspdk_rdma.a 00:16:47.947 CC lib/env_dpdk/pci_virtio.o 00:16:47.947 CC lib/vmd/led.o 00:16:47.947 CC lib/env_dpdk/pci_vmd.o 00:16:47.947 CC lib/env_dpdk/pci_idxd.o 00:16:48.206 CC lib/env_dpdk/pci_event.o 00:16:48.206 CC lib/env_dpdk/sigbus_handler.o 00:16:48.206 CC lib/jsonrpc/jsonrpc_server.o 00:16:48.206 LIB libspdk_idxd.a 00:16:48.206 CC lib/env_dpdk/pci_dpdk.o 00:16:48.206 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:16:48.206 CC lib/jsonrpc/jsonrpc_client.o 00:16:48.206 LIB libspdk_vmd.a 00:16:48.206 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:48.206 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:48.206 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:48.464 LIB libspdk_jsonrpc.a 00:16:48.722 CC lib/rpc/rpc.o 00:16:48.982 LIB libspdk_rpc.a 00:16:48.982 LIB libspdk_env_dpdk.a 00:16:49.240 CC lib/keyring/keyring.o 00:16:49.240 CC lib/trace/trace.o 00:16:49.240 CC lib/keyring/keyring_rpc.o 00:16:49.240 CC lib/trace/trace_flags.o 00:16:49.240 CC lib/trace/trace_rpc.o 00:16:49.240 CC lib/notify/notify_rpc.o 00:16:49.240 CC lib/notify/notify.o 00:16:49.240 LIB libspdk_notify.a 00:16:49.498 LIB libspdk_keyring.a 00:16:49.498 LIB libspdk_trace.a 00:16:49.756 CC lib/sock/sock.o 00:16:49.756 CC lib/thread/thread.o 00:16:49.756 CC lib/sock/sock_rpc.o 00:16:49.756 CC lib/thread/iobuf.o 00:16:50.015 LIB libspdk_sock.a 00:16:50.273 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:50.273 CC lib/nvme/nvme_ctrlr.o 00:16:50.273 CC lib/nvme/nvme_fabric.o 00:16:50.273 CC lib/nvme/nvme_pcie_common.o 00:16:50.273 CC lib/nvme/nvme_ns.o 00:16:50.273 CC lib/nvme/nvme_ns_cmd.o 00:16:50.273 CC lib/nvme/nvme.o 00:16:50.273 CC lib/nvme/nvme_qpair.o 00:16:50.273 CC lib/nvme/nvme_pcie.o 00:16:51.215 CC lib/nvme/nvme_quirks.o 00:16:51.215 CC lib/nvme/nvme_transport.o 00:16:51.215 CC lib/nvme/nvme_discovery.o 00:16:51.215 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:51.215 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:51.215 CC lib/nvme/nvme_tcp.o 00:16:51.215 CC lib/nvme/nvme_opal.o 00:16:51.473 CC lib/nvme/nvme_io_msg.o 00:16:51.473 CC lib/nvme/nvme_poll_group.o 00:16:51.473 LIB libspdk_thread.a 00:16:51.473 CC lib/nvme/nvme_zns.o 00:16:51.473 CC lib/nvme/nvme_stubs.o 00:16:51.473 CC lib/nvme/nvme_auth.o 00:16:51.731 CC lib/nvme/nvme_cuse.o 00:16:51.731 CC lib/accel/accel.o 00:16:51.989 CC lib/nvme/nvme_rdma.o 00:16:51.989 CC lib/accel/accel_rpc.o 00:16:51.989 CC lib/accel/accel_sw.o 00:16:51.989 CC lib/blob/blobstore.o 00:16:52.249 CC lib/init/json_config.o 00:16:52.249 CC lib/virtio/virtio.o 00:16:52.249 CC lib/virtio/virtio_vhost_user.o 00:16:52.249 CC lib/virtio/virtio_vfio_user.o 00:16:52.249 CC lib/virtio/virtio_pci.o 00:16:52.507 CC lib/init/subsystem.o 00:16:52.507 CC lib/init/subsystem_rpc.o 00:16:52.507 CC lib/init/rpc.o 00:16:52.507 CC lib/blob/request.o 00:16:52.507 CC lib/blob/zeroes.o 00:16:52.765 CC lib/blob/blob_bs_dev.o 00:16:52.765 LIB libspdk_virtio.a 00:16:52.765 LIB libspdk_init.a 00:16:53.025 CC lib/event/log_rpc.o 00:16:53.025 CC lib/event/app.o 00:16:53.025 CC lib/event/reactor.o 00:16:53.025 CC lib/event/app_rpc.o 00:16:53.025 CC lib/event/scheduler_static.o 00:16:53.025 LIB libspdk_accel.a 00:16:53.284 CC lib/bdev/bdev.o 00:16:53.284 CC lib/bdev/bdev_zone.o 00:16:53.284 CC lib/bdev/bdev_rpc.o 00:16:53.284 CC lib/bdev/part.o 00:16:53.284 CC lib/bdev/scsi_nvme.o 00:16:53.541 LIB libspdk_event.a 00:16:53.541 LIB libspdk_nvme.a 00:16:56.074 LIB libspdk_blob.a 00:16:56.074 CC lib/lvol/lvol.o 00:16:56.074 CC lib/blobfs/blobfs.o 00:16:56.074 CC lib/blobfs/tree.o 00:16:56.074 LIB libspdk_bdev.a 00:16:56.333 CC lib/nvmf/ctrlr_discovery.o 00:16:56.333 CC lib/nvmf/ctrlr.o 00:16:56.333 CC lib/nvmf/ctrlr_bdev.o 00:16:56.333 CC lib/nvmf/subsystem.o 00:16:56.333 CC lib/nvmf/nvmf.o 00:16:56.333 CC lib/ftl/ftl_core.o 00:16:56.333 CC lib/nbd/nbd.o 00:16:56.333 CC lib/scsi/dev.o 00:16:56.592 CC lib/scsi/lun.o 00:16:56.592 CC lib/ftl/ftl_init.o 00:16:56.851 CC lib/nbd/nbd_rpc.o 00:16:56.851 LIB libspdk_blobfs.a 00:16:56.851 CC 
lib/ftl/ftl_layout.o 00:16:56.851 CC lib/ftl/ftl_debug.o 00:16:56.851 CC lib/ftl/ftl_io.o 00:16:57.110 LIB libspdk_nbd.a 00:16:57.110 CC lib/scsi/port.o 00:16:57.110 LIB libspdk_lvol.a 00:16:57.110 CC lib/scsi/scsi.o 00:16:57.110 CC lib/ftl/ftl_sb.o 00:16:57.110 CC lib/scsi/scsi_bdev.o 00:16:57.369 CC lib/nvmf/nvmf_rpc.o 00:16:57.369 CC lib/scsi/scsi_pr.o 00:16:57.369 CC lib/scsi/scsi_rpc.o 00:16:57.369 CC lib/scsi/task.o 00:16:57.369 CC lib/ftl/ftl_l2p.o 00:16:57.369 CC lib/ftl/ftl_l2p_flat.o 00:16:57.369 CC lib/ftl/ftl_nv_cache.o 00:16:57.369 CC lib/ftl/ftl_band.o 00:16:57.628 CC lib/ftl/ftl_band_ops.o 00:16:57.628 CC lib/ftl/ftl_writer.o 00:16:57.628 CC lib/nvmf/tcp.o 00:16:57.628 CC lib/nvmf/transport.o 00:16:57.628 CC lib/nvmf/stubs.o 00:16:57.628 LIB libspdk_scsi.a 00:16:57.886 CC lib/nvmf/rdma.o 00:16:57.886 CC lib/ftl/ftl_rq.o 00:16:57.886 CC lib/ftl/ftl_reloc.o 00:16:58.145 CC lib/iscsi/conn.o 00:16:58.145 CC lib/iscsi/init_grp.o 00:16:58.145 CC lib/iscsi/iscsi.o 00:16:58.404 CC lib/ftl/ftl_l2p_cache.o 00:16:58.404 CC lib/iscsi/md5.o 00:16:58.404 CC lib/ftl/ftl_p2l.o 00:16:58.404 CC lib/vhost/vhost.o 00:16:58.662 CC lib/vhost/vhost_rpc.o 00:16:58.662 CC lib/iscsi/param.o 00:16:58.662 CC lib/vhost/vhost_scsi.o 00:16:58.921 CC lib/vhost/vhost_blk.o 00:16:58.921 CC lib/ftl/mngt/ftl_mngt.o 00:16:58.921 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:58.921 CC lib/vhost/rte_vhost_user.o 00:16:59.183 CC lib/iscsi/portal_grp.o 00:16:59.183 CC lib/iscsi/tgt_node.o 00:16:59.183 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:59.183 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:59.441 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:59.441 CC lib/iscsi/iscsi_subsystem.o 00:16:59.441 CC lib/iscsi/iscsi_rpc.o 00:16:59.441 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:59.700 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:59.700 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:59.700 CC lib/iscsi/task.o 00:16:59.700 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:59.958 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:59.958 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:59.958 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:59.958 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:59.958 CC lib/ftl/utils/ftl_conf.o 00:16:59.958 CC lib/ftl/utils/ftl_md.o 00:16:59.958 LIB libspdk_iscsi.a 00:16:59.958 LIB libspdk_vhost.a 00:16:59.958 CC lib/ftl/utils/ftl_mempool.o 00:16:59.958 CC lib/ftl/utils/ftl_bitmap.o 00:17:00.216 CC lib/ftl/utils/ftl_property.o 00:17:00.216 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:17:00.216 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:17:00.216 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:17:00.216 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:17:00.216 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:17:00.216 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:17:00.216 CC lib/ftl/upgrade/ftl_sb_v3.o 00:17:00.475 LIB libspdk_nvmf.a 00:17:00.475 CC lib/ftl/upgrade/ftl_sb_v5.o 00:17:00.475 CC lib/ftl/nvc/ftl_nvc_dev.o 00:17:00.475 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:17:00.475 CC lib/ftl/base/ftl_base_dev.o 00:17:00.475 CC lib/ftl/base/ftl_base_bdev.o 00:17:00.475 CC lib/ftl/ftl_trace.o 00:17:00.733 LIB libspdk_ftl.a 00:17:01.299 CC module/env_dpdk/env_dpdk_rpc.o 00:17:01.299 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:17:01.299 CC module/scheduler/gscheduler/gscheduler.o 00:17:01.299 CC module/blob/bdev/blob_bdev.o 00:17:01.299 CC module/sock/posix/posix.o 00:17:01.299 CC module/accel/ioat/accel_ioat.o 00:17:01.299 CC module/accel/dsa/accel_dsa.o 00:17:01.299 CC module/scheduler/dynamic/scheduler_dynamic.o 00:17:01.299 CC module/keyring/file/keyring.o 00:17:01.299 CC 
module/accel/error/accel_error.o 00:17:01.299 LIB libspdk_env_dpdk_rpc.a 00:17:01.299 LIB libspdk_scheduler_gscheduler.a 00:17:01.299 CC module/accel/dsa/accel_dsa_rpc.o 00:17:01.299 LIB libspdk_scheduler_dpdk_governor.a 00:17:01.299 CC module/keyring/file/keyring_rpc.o 00:17:01.299 CC module/accel/ioat/accel_ioat_rpc.o 00:17:01.557 LIB libspdk_scheduler_dynamic.a 00:17:01.557 CC module/accel/error/accel_error_rpc.o 00:17:01.557 LIB libspdk_accel_dsa.a 00:17:01.557 LIB libspdk_blob_bdev.a 00:17:01.557 LIB libspdk_accel_ioat.a 00:17:01.557 LIB libspdk_keyring_file.a 00:17:01.557 CC module/accel/iaa/accel_iaa.o 00:17:01.557 CC module/accel/iaa/accel_iaa_rpc.o 00:17:01.557 LIB libspdk_accel_error.a 00:17:01.557 CC module/keyring/linux/keyring.o 00:17:01.557 CC module/keyring/linux/keyring_rpc.o 00:17:01.815 CC module/blobfs/bdev/blobfs_bdev.o 00:17:01.815 CC module/bdev/error/vbdev_error.o 00:17:01.815 CC module/bdev/lvol/vbdev_lvol.o 00:17:01.815 CC module/bdev/delay/vbdev_delay.o 00:17:01.815 CC module/bdev/gpt/gpt.o 00:17:01.815 LIB libspdk_accel_iaa.a 00:17:01.815 CC module/bdev/gpt/vbdev_gpt.o 00:17:01.815 LIB libspdk_keyring_linux.a 00:17:01.815 CC module/bdev/delay/vbdev_delay_rpc.o 00:17:01.815 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:17:01.815 CC module/bdev/malloc/bdev_malloc.o 00:17:02.074 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:17:02.074 CC module/bdev/malloc/bdev_malloc_rpc.o 00:17:02.074 LIB libspdk_sock_posix.a 00:17:02.074 CC module/bdev/error/vbdev_error_rpc.o 00:17:02.074 LIB libspdk_blobfs_bdev.a 00:17:02.074 LIB libspdk_bdev_gpt.a 00:17:02.074 CC module/bdev/null/bdev_null.o 00:17:02.074 CC module/bdev/null/bdev_null_rpc.o 00:17:02.334 LIB libspdk_bdev_delay.a 00:17:02.334 LIB libspdk_bdev_error.a 00:17:02.334 CC module/bdev/nvme/bdev_nvme.o 00:17:02.334 CC module/bdev/passthru/vbdev_passthru.o 00:17:02.334 LIB libspdk_bdev_lvol.a 00:17:02.334 CC module/bdev/raid/bdev_raid.o 00:17:02.334 LIB libspdk_bdev_malloc.a 00:17:02.334 CC module/bdev/raid/bdev_raid_rpc.o 00:17:02.334 CC module/bdev/raid/bdev_raid_sb.o 00:17:02.334 CC module/bdev/split/vbdev_split.o 00:17:02.334 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:17:02.334 CC module/bdev/zone_block/vbdev_zone_block.o 00:17:02.334 LIB libspdk_bdev_null.a 00:17:02.642 CC module/bdev/aio/bdev_aio.o 00:17:02.642 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:17:02.642 CC module/bdev/raid/raid0.o 00:17:02.642 CC module/bdev/split/vbdev_split_rpc.o 00:17:02.642 CC module/bdev/ftl/bdev_ftl.o 00:17:02.642 CC module/bdev/raid/raid1.o 00:17:02.643 LIB libspdk_bdev_passthru.a 00:17:02.948 CC module/bdev/raid/concat.o 00:17:02.948 CC module/bdev/raid/raid5f.o 00:17:02.948 LIB libspdk_bdev_split.a 00:17:02.948 LIB libspdk_bdev_zone_block.a 00:17:02.948 CC module/bdev/aio/bdev_aio_rpc.o 00:17:02.948 CC module/bdev/nvme/bdev_nvme_rpc.o 00:17:02.948 CC module/bdev/nvme/nvme_rpc.o 00:17:02.948 CC module/bdev/ftl/bdev_ftl_rpc.o 00:17:02.948 CC module/bdev/nvme/bdev_mdns_client.o 00:17:02.948 CC module/bdev/iscsi/bdev_iscsi.o 00:17:02.948 LIB libspdk_bdev_aio.a 00:17:02.948 CC module/bdev/virtio/bdev_virtio_scsi.o 00:17:03.207 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:17:03.207 CC module/bdev/nvme/vbdev_opal.o 00:17:03.207 LIB libspdk_bdev_ftl.a 00:17:03.207 CC module/bdev/nvme/vbdev_opal_rpc.o 00:17:03.207 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:17:03.207 CC module/bdev/virtio/bdev_virtio_blk.o 00:17:03.465 CC module/bdev/virtio/bdev_virtio_rpc.o 00:17:03.465 LIB libspdk_bdev_iscsi.a 00:17:03.465 LIB libspdk_bdev_raid.a 
00:17:03.724 LIB libspdk_bdev_virtio.a 00:17:04.661 LIB libspdk_bdev_nvme.a 00:17:05.227 CC module/event/subsystems/sock/sock.o 00:17:05.227 CC module/event/subsystems/vmd/vmd_rpc.o 00:17:05.227 CC module/event/subsystems/vmd/vmd.o 00:17:05.227 CC module/event/subsystems/scheduler/scheduler.o 00:17:05.227 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:17:05.227 CC module/event/subsystems/iobuf/iobuf.o 00:17:05.227 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:17:05.227 CC module/event/subsystems/keyring/keyring.o 00:17:05.227 LIB libspdk_event_sock.a 00:17:05.227 LIB libspdk_event_vhost_blk.a 00:17:05.227 LIB libspdk_event_scheduler.a 00:17:05.227 LIB libspdk_event_keyring.a 00:17:05.227 LIB libspdk_event_iobuf.a 00:17:05.227 LIB libspdk_event_vmd.a 00:17:05.486 CC module/event/subsystems/accel/accel.o 00:17:05.745 LIB libspdk_event_accel.a 00:17:06.003 CC module/event/subsystems/bdev/bdev.o 00:17:06.003 LIB libspdk_event_bdev.a 00:17:06.262 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:17:06.262 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:17:06.262 CC module/event/subsystems/scsi/scsi.o 00:17:06.262 CC module/event/subsystems/nbd/nbd.o 00:17:06.521 LIB libspdk_event_scsi.a 00:17:06.521 LIB libspdk_event_nbd.a 00:17:06.522 LIB libspdk_event_nvmf.a 00:17:06.782 CC module/event/subsystems/iscsi/iscsi.o 00:17:06.782 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:17:06.782 LIB libspdk_event_vhost_scsi.a 00:17:06.782 LIB libspdk_event_iscsi.a 00:17:07.042 CXX app/trace/trace.o 00:17:07.042 CC app/trace_record/trace_record.o 00:17:07.042 CC app/nvmf_tgt/nvmf_main.o 00:17:07.042 CC examples/accel/perf/accel_perf.o 00:17:07.042 CC examples/ioat/perf/perf.o 00:17:07.301 CC examples/sock/hello_world/hello_sock.o 00:17:07.301 CC examples/nvme/hello_world/hello_world.o 00:17:07.301 CC test/accel/dif/dif.o 00:17:07.301 CC examples/blob/hello_world/hello_blob.o 00:17:07.301 CC examples/bdev/hello_world/hello_bdev.o 00:17:07.301 LINK spdk_trace_record 00:17:07.301 LINK ioat_perf 00:17:07.301 LINK nvmf_tgt 00:17:07.560 LINK hello_sock 00:17:07.560 LINK hello_world 00:17:07.560 LINK hello_bdev 00:17:07.560 LINK hello_blob 00:17:07.560 LINK spdk_trace 00:17:07.819 LINK dif 00:17:07.819 LINK accel_perf 00:17:08.079 CC examples/bdev/bdevperf/bdevperf.o 00:17:08.079 CC examples/ioat/verify/verify.o 00:17:08.079 CC examples/nvme/reconnect/reconnect.o 00:17:08.337 LINK verify 00:17:08.337 CC examples/nvme/nvme_manage/nvme_manage.o 00:17:08.337 LINK reconnect 00:17:08.904 CC test/app/bdev_svc/bdev_svc.o 00:17:08.904 LINK bdevperf 00:17:08.904 LINK nvme_manage 00:17:08.904 CC examples/nvme/arbitration/arbitration.o 00:17:08.904 LINK bdev_svc 00:17:08.904 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:17:09.163 LINK arbitration 00:17:09.421 LINK nvme_fuzz 00:17:09.679 CC test/bdev/bdevio/bdevio.o 00:17:10.245 LINK bdevio 00:17:10.245 CC test/blobfs/mkfs/mkfs.o 00:17:10.503 LINK mkfs 00:17:10.503 CC examples/nvme/hotplug/hotplug.o 00:17:10.503 CC examples/blob/cli/blobcli.o 00:17:10.503 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:10.761 LINK hotplug 00:17:10.761 CC examples/nvme/abort/abort.o 00:17:10.761 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:17:10.761 LINK cmb_copy 00:17:10.761 CC app/iscsi_tgt/iscsi_tgt.o 00:17:11.018 LINK iscsi_tgt 00:17:11.018 LINK blobcli 00:17:11.018 LINK abort 00:17:11.951 CC app/spdk_tgt/spdk_tgt.o 00:17:11.951 CC app/spdk_lspci/spdk_lspci.o 00:17:11.951 CC app/spdk_nvme_perf/perf.o 00:17:11.951 TEST_HEADER include/spdk/ioat.h 00:17:11.951 TEST_HEADER include/spdk/blobfs.h 
00:17:11.951 TEST_HEADER include/spdk/notify.h 00:17:11.951 TEST_HEADER include/spdk/pipe.h 00:17:11.951 TEST_HEADER include/spdk/accel.h 00:17:11.951 TEST_HEADER include/spdk/file.h 00:17:11.951 TEST_HEADER include/spdk/version.h 00:17:11.951 TEST_HEADER include/spdk/trace_parser.h 00:17:12.209 TEST_HEADER include/spdk/opal_spec.h 00:17:12.209 TEST_HEADER include/spdk/uuid.h 00:17:12.209 TEST_HEADER include/spdk/likely.h 00:17:12.209 TEST_HEADER include/spdk/dif.h 00:17:12.209 TEST_HEADER include/spdk/keyring_module.h 00:17:12.209 TEST_HEADER include/spdk/memory.h 00:17:12.209 TEST_HEADER include/spdk/vfio_user_pci.h 00:17:12.209 TEST_HEADER include/spdk/dma.h 00:17:12.209 TEST_HEADER include/spdk/nbd.h 00:17:12.209 TEST_HEADER include/spdk/conf.h 00:17:12.209 TEST_HEADER include/spdk/env_dpdk.h 00:17:12.209 TEST_HEADER include/spdk/nvmf_spec.h 00:17:12.209 TEST_HEADER include/spdk/iscsi_spec.h 00:17:12.209 TEST_HEADER include/spdk/mmio.h 00:17:12.209 TEST_HEADER include/spdk/json.h 00:17:12.209 TEST_HEADER include/spdk/opal.h 00:17:12.209 TEST_HEADER include/spdk/bdev.h 00:17:12.209 TEST_HEADER include/spdk/keyring.h 00:17:12.209 TEST_HEADER include/spdk/base64.h 00:17:12.209 TEST_HEADER include/spdk/blobfs_bdev.h 00:17:12.209 TEST_HEADER include/spdk/nvme_ocssd.h 00:17:12.209 TEST_HEADER include/spdk/fd.h 00:17:12.209 TEST_HEADER include/spdk/barrier.h 00:17:12.209 TEST_HEADER include/spdk/scsi_spec.h 00:17:12.209 TEST_HEADER include/spdk/zipf.h 00:17:12.209 TEST_HEADER include/spdk/nvmf.h 00:17:12.209 TEST_HEADER include/spdk/queue.h 00:17:12.209 TEST_HEADER include/spdk/xor.h 00:17:12.209 TEST_HEADER include/spdk/cpuset.h 00:17:12.209 TEST_HEADER include/spdk/thread.h 00:17:12.209 TEST_HEADER include/spdk/bdev_zone.h 00:17:12.209 TEST_HEADER include/spdk/fd_group.h 00:17:12.209 TEST_HEADER include/spdk/tree.h 00:17:12.209 TEST_HEADER include/spdk/blob_bdev.h 00:17:12.209 TEST_HEADER include/spdk/crc64.h 00:17:12.209 TEST_HEADER include/spdk/assert.h 00:17:12.209 TEST_HEADER include/spdk/nvme_spec.h 00:17:12.209 TEST_HEADER include/spdk/endian.h 00:17:12.209 TEST_HEADER include/spdk/pci_ids.h 00:17:12.209 TEST_HEADER include/spdk/log.h 00:17:12.209 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:17:12.209 TEST_HEADER include/spdk/ftl.h 00:17:12.209 TEST_HEADER include/spdk/config.h 00:17:12.209 TEST_HEADER include/spdk/vhost.h 00:17:12.209 TEST_HEADER include/spdk/bdev_module.h 00:17:12.209 TEST_HEADER include/spdk/nvme_intel.h 00:17:12.209 LINK spdk_lspci 00:17:12.209 TEST_HEADER include/spdk/idxd_spec.h 00:17:12.209 TEST_HEADER include/spdk/crc16.h 00:17:12.209 TEST_HEADER include/spdk/nvme.h 00:17:12.209 TEST_HEADER include/spdk/stdinc.h 00:17:12.209 TEST_HEADER include/spdk/scsi.h 00:17:12.209 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:17:12.209 TEST_HEADER include/spdk/idxd.h 00:17:12.209 TEST_HEADER include/spdk/hexlify.h 00:17:12.209 TEST_HEADER include/spdk/reduce.h 00:17:12.209 TEST_HEADER include/spdk/crc32.h 00:17:12.209 LINK spdk_tgt 00:17:12.209 TEST_HEADER include/spdk/init.h 00:17:12.209 TEST_HEADER include/spdk/nvmf_transport.h 00:17:12.209 TEST_HEADER include/spdk/nvme_zns.h 00:17:12.209 TEST_HEADER include/spdk/vfio_user_spec.h 00:17:12.209 TEST_HEADER include/spdk/util.h 00:17:12.209 TEST_HEADER include/spdk/jsonrpc.h 00:17:12.209 TEST_HEADER include/spdk/env.h 00:17:12.209 TEST_HEADER include/spdk/nvmf_cmd.h 00:17:12.209 TEST_HEADER include/spdk/lvol.h 00:17:12.209 TEST_HEADER include/spdk/histogram_data.h 00:17:12.209 TEST_HEADER include/spdk/event.h 00:17:12.209 
TEST_HEADER include/spdk/trace.h 00:17:12.209 TEST_HEADER include/spdk/ioat_spec.h 00:17:12.209 TEST_HEADER include/spdk/string.h 00:17:12.209 TEST_HEADER include/spdk/ublk.h 00:17:12.209 TEST_HEADER include/spdk/bit_array.h 00:17:12.209 TEST_HEADER include/spdk/scheduler.h 00:17:12.209 TEST_HEADER include/spdk/blob.h 00:17:12.209 TEST_HEADER include/spdk/gpt_spec.h 00:17:12.209 TEST_HEADER include/spdk/sock.h 00:17:12.209 TEST_HEADER include/spdk/vmd.h 00:17:12.209 TEST_HEADER include/spdk/rpc.h 00:17:12.209 TEST_HEADER include/spdk/accel_module.h 00:17:12.209 TEST_HEADER include/spdk/bit_pool.h 00:17:12.209 CXX test/cpp_headers/ioat.o 00:17:12.467 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:17:12.467 CXX test/cpp_headers/blobfs.o 00:17:12.467 LINK pmr_persistence 00:17:12.726 CXX test/cpp_headers/notify.o 00:17:12.726 LINK iscsi_fuzz 00:17:12.727 CXX test/cpp_headers/pipe.o 00:17:12.986 CXX test/cpp_headers/accel.o 00:17:13.244 LINK spdk_nvme_perf 00:17:13.244 CXX test/cpp_headers/file.o 00:17:13.244 CXX test/cpp_headers/version.o 00:17:13.244 CXX test/cpp_headers/trace_parser.o 00:17:13.502 CXX test/cpp_headers/opal_spec.o 00:17:13.760 CC test/event/event_perf/event_perf.o 00:17:13.760 CC test/dma/test_dma/test_dma.o 00:17:14.074 CXX test/cpp_headers/uuid.o 00:17:14.074 CC test/env/mem_callbacks/mem_callbacks.o 00:17:14.074 LINK event_perf 00:17:14.074 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:17:14.074 CC test/lvol/esnap/esnap.o 00:17:14.334 CXX test/cpp_headers/likely.o 00:17:14.334 LINK test_dma 00:17:14.593 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:17:14.593 CXX test/cpp_headers/dif.o 00:17:14.593 CC examples/vmd/lsvmd/lsvmd.o 00:17:14.593 CC app/spdk_nvme_identify/identify.o 00:17:14.593 LINK mem_callbacks 00:17:14.857 CC test/env/vtophys/vtophys.o 00:17:14.857 LINK lsvmd 00:17:14.857 CC test/event/reactor/reactor.o 00:17:14.857 CXX test/cpp_headers/keyring_module.o 00:17:14.857 LINK vtophys 00:17:15.115 LINK reactor 00:17:15.115 LINK vhost_fuzz 00:17:15.115 CXX test/cpp_headers/memory.o 00:17:15.115 CC test/event/reactor_perf/reactor_perf.o 00:17:15.374 CXX test/cpp_headers/vfio_user_pci.o 00:17:15.374 LINK reactor_perf 00:17:15.374 CXX test/cpp_headers/dma.o 00:17:15.633 LINK spdk_nvme_identify 00:17:15.633 CXX test/cpp_headers/nbd.o 00:17:15.633 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:17:15.633 CXX test/cpp_headers/conf.o 00:17:15.891 CC test/env/memory/memory_ut.o 00:17:15.891 LINK env_dpdk_post_init 00:17:15.891 CXX test/cpp_headers/env_dpdk.o 00:17:15.891 CC examples/vmd/led/led.o 00:17:15.892 CC test/app/histogram_perf/histogram_perf.o 00:17:16.150 CXX test/cpp_headers/nvmf_spec.o 00:17:16.150 CXX test/cpp_headers/iscsi_spec.o 00:17:16.150 CC test/event/app_repeat/app_repeat.o 00:17:16.150 LINK led 00:17:16.150 LINK histogram_perf 00:17:16.150 CXX test/cpp_headers/mmio.o 00:17:16.408 CC test/env/pci/pci_ut.o 00:17:16.408 LINK app_repeat 00:17:16.408 CXX test/cpp_headers/json.o 00:17:16.408 LINK memory_ut 00:17:16.667 CXX test/cpp_headers/opal.o 00:17:16.667 CC app/spdk_nvme_discover/discovery_aer.o 00:17:16.667 CXX test/cpp_headers/bdev.o 00:17:16.667 LINK pci_ut 00:17:16.965 LINK spdk_nvme_discover 00:17:16.965 CXX test/cpp_headers/keyring.o 00:17:16.965 CC app/spdk_top/spdk_top.o 00:17:16.965 CC test/app/jsoncat/jsoncat.o 00:17:16.965 CXX test/cpp_headers/base64.o 00:17:17.223 LINK jsoncat 00:17:17.223 CXX test/cpp_headers/blobfs_bdev.o 00:17:17.223 CC app/vhost/vhost.o 00:17:17.481 CC app/spdk_dd/spdk_dd.o 00:17:17.481 CXX 
test/cpp_headers/nvme_ocssd.o 00:17:17.481 CXX test/cpp_headers/fd.o 00:17:17.481 LINK vhost 00:17:17.481 CC examples/nvmf/nvmf/nvmf.o 00:17:17.481 CC test/event/scheduler/scheduler.o 00:17:17.763 CXX test/cpp_headers/barrier.o 00:17:17.763 LINK spdk_dd 00:17:17.763 CC app/fio/nvme/fio_plugin.o 00:17:17.763 CXX test/cpp_headers/scsi_spec.o 00:17:17.763 LINK scheduler 00:17:17.763 CC test/app/stub/stub.o 00:17:17.763 LINK nvmf 00:17:18.022 LINK spdk_top 00:17:18.022 CXX test/cpp_headers/zipf.o 00:17:18.022 CXX test/cpp_headers/nvmf.o 00:17:18.022 LINK stub 00:17:18.281 CC app/fio/bdev/fio_plugin.o 00:17:18.281 CXX test/cpp_headers/queue.o 00:17:18.281 CXX test/cpp_headers/xor.o 00:17:18.539 LINK spdk_nvme 00:17:18.539 CXX test/cpp_headers/cpuset.o 00:17:18.539 CC test/nvme/aer/aer.o 00:17:18.798 CXX test/cpp_headers/thread.o 00:17:18.798 CXX test/cpp_headers/bdev_zone.o 00:17:18.798 LINK spdk_bdev 00:17:19.056 LINK aer 00:17:19.056 CXX test/cpp_headers/fd_group.o 00:17:19.315 CXX test/cpp_headers/tree.o 00:17:19.315 CXX test/cpp_headers/blob_bdev.o 00:17:19.315 CXX test/cpp_headers/crc64.o 00:17:19.574 CC test/nvme/reset/reset.o 00:17:19.574 CXX test/cpp_headers/assert.o 00:17:19.574 CXX test/cpp_headers/nvme_spec.o 00:17:19.574 LINK esnap 00:17:19.833 CXX test/cpp_headers/endian.o 00:17:19.833 LINK reset 00:17:19.833 CC test/nvme/sgl/sgl.o 00:17:20.091 CXX test/cpp_headers/pci_ids.o 00:17:20.091 CXX test/cpp_headers/log.o 00:17:20.091 CXX test/cpp_headers/nvme_ocssd_spec.o 00:17:20.091 LINK sgl 00:17:20.349 CC test/nvme/e2edp/nvme_dp.o 00:17:20.349 CC test/nvme/overhead/overhead.o 00:17:20.608 CXX test/cpp_headers/ftl.o 00:17:20.608 LINK nvme_dp 00:17:20.608 CXX test/cpp_headers/config.o 00:17:20.867 LINK overhead 00:17:20.867 CC test/rpc_client/rpc_client_test.o 00:17:20.867 CXX test/cpp_headers/vhost.o 00:17:20.867 CXX test/cpp_headers/bdev_module.o 00:17:21.126 CXX test/cpp_headers/nvme_intel.o 00:17:21.126 CC examples/util/zipf/zipf.o 00:17:21.127 LINK rpc_client_test 00:17:21.127 CXX test/cpp_headers/idxd_spec.o 00:17:21.127 CXX test/cpp_headers/crc16.o 00:17:21.127 CC examples/thread/thread/thread_ex.o 00:17:21.386 LINK zipf 00:17:21.386 CXX test/cpp_headers/nvme.o 00:17:21.386 CC examples/idxd/perf/perf.o 00:17:21.386 CC examples/interrupt_tgt/interrupt_tgt.o 00:17:21.644 CC test/thread/poller_perf/poller_perf.o 00:17:21.644 CXX test/cpp_headers/stdinc.o 00:17:21.644 LINK thread 00:17:21.644 LINK interrupt_tgt 00:17:21.903 CXX test/cpp_headers/scsi.o 00:17:21.903 CC test/thread/lock/spdk_lock.o 00:17:21.903 LINK poller_perf 00:17:21.903 CC test/nvme/err_injection/err_injection.o 00:17:21.903 LINK idxd_perf 00:17:21.903 CXX test/cpp_headers/nvmf_fc_spec.o 00:17:22.161 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:17:22.161 LINK err_injection 00:17:22.161 CC test/unit/lib/accel/accel.c/accel_ut.o 00:17:22.161 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:17:22.161 CXX test/cpp_headers/idxd.o 00:17:22.161 LINK histogram_ut 00:17:22.420 CXX test/cpp_headers/hexlify.o 00:17:22.420 CXX test/cpp_headers/reduce.o 00:17:22.420 CXX test/cpp_headers/crc32.o 00:17:22.678 CXX test/cpp_headers/init.o 00:17:22.678 CC test/nvme/startup/startup.o 00:17:22.678 CC test/nvme/reserve/reserve.o 00:17:22.678 CXX test/cpp_headers/nvmf_transport.o 00:17:22.678 LINK startup 00:17:22.938 CC test/nvme/simple_copy/simple_copy.o 00:17:22.938 LINK reserve 00:17:22.938 CXX test/cpp_headers/nvme_zns.o 00:17:23.196 LINK simple_copy 00:17:23.196 CXX test/cpp_headers/vfio_user_spec.o 00:17:23.454 CXX 
test/cpp_headers/util.o 00:17:23.454 CXX test/cpp_headers/jsonrpc.o 00:17:23.712 CC test/unit/lib/bdev/part.c/part_ut.o 00:17:23.712 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:17:23.712 LINK spdk_lock 00:17:23.712 CXX test/cpp_headers/env.o 00:17:23.970 CXX test/cpp_headers/nvmf_cmd.o 00:17:23.970 CXX test/cpp_headers/lvol.o 00:17:24.266 CXX test/cpp_headers/histogram_data.o 00:17:24.562 CXX test/cpp_headers/event.o 00:17:24.562 LINK scsi_nvme_ut 00:17:24.562 CC test/nvme/connect_stress/connect_stress.o 00:17:24.562 CC test/nvme/boot_partition/boot_partition.o 00:17:24.562 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:17:24.562 CXX test/cpp_headers/trace.o 00:17:24.562 LINK connect_stress 00:17:24.562 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:17:24.819 LINK boot_partition 00:17:24.819 CXX test/cpp_headers/ioat_spec.o 00:17:24.819 LINK accel_ut 00:17:24.819 CXX test/cpp_headers/string.o 00:17:24.819 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:17:25.078 CXX test/cpp_headers/ublk.o 00:17:25.078 LINK gpt_ut 00:17:25.078 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:17:25.078 CXX test/cpp_headers/bit_array.o 00:17:25.336 CXX test/cpp_headers/scheduler.o 00:17:25.336 LINK tree_ut 00:17:25.336 CC test/unit/lib/event/app.c/app_ut.o 00:17:25.336 LINK blob_bdev_ut 00:17:25.593 CC test/unit/lib/dma/dma.c/dma_ut.o 00:17:25.593 CXX test/cpp_headers/blob.o 00:17:25.593 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:17:25.851 CXX test/cpp_headers/gpt_spec.o 00:17:25.851 CC test/unit/lib/blob/blob.c/blob_ut.o 00:17:25.851 LINK dma_ut 00:17:25.851 CXX test/cpp_headers/sock.o 00:17:25.851 LINK vbdev_lvol_ut 00:17:26.111 CC test/nvme/compliance/nvme_compliance.o 00:17:26.111 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:17:26.111 LINK app_ut 00:17:26.111 CXX test/cpp_headers/vmd.o 00:17:26.111 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:17:26.369 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:17:26.369 CXX test/cpp_headers/rpc.o 00:17:26.369 LINK nvme_compliance 00:17:26.628 LINK ioat_ut 00:17:26.628 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:17:26.628 CXX test/cpp_headers/accel_module.o 00:17:26.628 LINK init_grp_ut 00:17:26.628 CXX test/cpp_headers/bit_pool.o 00:17:26.886 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:17:27.145 CC test/nvme/fused_ordering/fused_ordering.o 00:17:27.145 LINK blobfs_async_ut 00:17:27.145 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:17:27.145 LINK fused_ordering 00:17:27.404 LINK reactor_ut 00:17:27.404 LINK part_ut 00:17:27.663 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:17:27.663 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:17:27.922 LINK conn_ut 00:17:27.922 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:17:27.922 LINK blobfs_bdev_ut 00:17:28.181 CC test/unit/lib/log/log.c/log_ut.o 00:17:28.181 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:17:28.181 LINK jsonrpc_server_ut 00:17:28.440 CC test/unit/lib/iscsi/param.c/param_ut.o 00:17:28.440 LINK log_ut 00:17:28.440 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:17:28.440 CC test/nvme/doorbell_aers/doorbell_aers.o 00:17:28.698 LINK bdev_ut 00:17:28.698 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:17:28.698 LINK doorbell_aers 00:17:28.956 LINK param_ut 00:17:28.956 CC test/nvme/fdp/fdp.o 00:17:29.275 LINK portal_grp_ut 00:17:29.275 LINK blobfs_sync_ut 00:17:29.275 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:17:29.533 LINK fdp 00:17:29.533 LINK tgt_node_ut 00:17:29.791 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 
00:17:29.791 CC test/unit/lib/notify/notify.c/notify_ut.o 00:17:30.051 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:17:30.051 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:17:30.051 LINK notify_ut 00:17:30.309 LINK json_parse_ut 00:17:30.309 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:17:30.567 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:17:30.825 LINK iscsi_ut 00:17:30.825 CC test/nvme/cuse/cuse.o 00:17:30.825 LINK bdev_ut 00:17:31.084 LINK lvol_ut 00:17:31.084 LINK json_util_ut 00:17:31.084 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:17:31.341 LINK nvme_ut 00:17:31.341 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:17:31.341 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:17:31.599 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:17:31.599 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:17:31.857 LINK cuse 00:17:31.857 LINK nvme_ctrlr_cmd_ut 00:17:32.115 LINK bdev_zone_ut 00:17:32.115 LINK bdev_raid_ut 00:17:32.373 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:17:32.373 LINK nvme_ns_ut 00:17:32.373 LINK nvme_ctrlr_ocssd_cmd_ut 00:17:32.373 LINK json_write_ut 00:17:32.373 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:17:32.631 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:17:32.631 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:17:32.891 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:17:33.149 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:17:33.150 LINK bdev_raid_sb_ut 00:17:33.150 LINK dev_ut 00:17:33.150 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:17:33.408 LINK vbdev_zone_block_ut 00:17:33.408 LINK nvme_ns_cmd_ut 00:17:33.408 LINK scsi_ut 00:17:33.408 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:17:33.408 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:17:33.666 LINK nvme_ctrlr_ut 00:17:33.666 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:17:33.926 LINK lun_ut 00:17:33.926 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:17:33.926 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:17:34.197 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:17:34.197 LINK concat_ut 00:17:34.197 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:17:34.456 LINK raid1_ut 00:17:34.456 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:17:34.714 LINK nvme_poll_group_ut 00:17:34.714 LINK blob_ut 00:17:34.714 CC test/unit/lib/sock/sock.c/sock_ut.o 00:17:34.972 CC test/unit/lib/thread/thread.c/thread_ut.o 00:17:34.972 LINK nvme_ns_ocssd_cmd_ut 00:17:35.230 LINK scsi_pr_ut 00:17:35.230 CC test/unit/lib/sock/posix.c/posix_ut.o 00:17:35.230 LINK raid5f_ut 00:17:35.488 LINK nvme_pcie_ut 00:17:35.488 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:17:35.488 LINK scsi_bdev_ut 00:17:35.488 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:17:35.746 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:17:35.746 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:17:35.746 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:17:36.311 LINK iobuf_ut 00:17:36.311 LINK nvme_quirks_ut 00:17:36.311 LINK sock_ut 00:17:36.569 LINK posix_ut 00:17:36.569 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:17:36.827 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:17:36.827 LINK nvme_qpair_ut 00:17:36.827 LINK tcp_ut 00:17:36.827 CC test/unit/lib/util/base64.c/base64_ut.o 00:17:36.827 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:17:37.085 LINK base64_ut 00:17:37.085 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:17:37.364 LINK pci_event_ut 00:17:37.364 CC 
test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:17:37.364 LINK thread_ut 00:17:37.631 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:17:37.631 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:17:37.888 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:17:37.888 LINK bit_array_ut 00:17:38.145 LINK bdev_nvme_ut 00:17:38.145 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:17:38.145 LINK nvme_io_msg_ut 00:17:38.145 LINK nvme_transport_ut 00:17:38.402 LINK nvme_fabric_ut 00:17:38.660 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:17:38.660 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:17:38.660 LINK cpuset_ut 00:17:38.660 LINK nvme_tcp_ut 00:17:38.660 LINK ctrlr_discovery_ut 00:17:38.660 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:17:38.917 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:17:38.917 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:17:38.917 LINK ctrlr_ut 00:17:38.917 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:17:38.917 LINK subsystem_ut 00:17:39.174 LINK crc16_ut 00:17:39.174 LINK crc32_ieee_ut 00:17:39.174 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:17:39.431 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:17:39.431 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:17:39.431 LINK nvme_pcie_common_ut 00:17:39.431 LINK crc32c_ut 00:17:39.431 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:17:39.431 CC test/unit/lib/util/dif.c/dif_ut.o 00:17:39.689 LINK nvme_opal_ut 00:17:39.690 LINK crc64_ut 00:17:39.690 LINK ctrlr_bdev_ut 00:17:39.690 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:17:39.947 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:17:39.948 LINK nvmf_ut 00:17:39.948 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:17:39.948 CC test/unit/lib/util/iov.c/iov_ut.o 00:17:40.205 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:17:40.205 LINK subsystem_ut 00:17:40.205 LINK iov_ut 00:17:40.463 CC test/unit/lib/util/math.c/math_ut.o 00:17:40.463 LINK rpc_ut 00:17:40.463 LINK auth_ut 00:17:40.721 LINK math_ut 00:17:40.721 LINK dif_ut 00:17:40.721 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:17:40.721 LINK rpc_ut 00:17:40.721 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:17:40.721 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:17:40.979 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:17:40.979 CC test/unit/lib/util/string.c/string_ut.o 00:17:40.979 LINK keyring_ut 00:17:40.979 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:17:41.237 CC test/unit/lib/rdma/common.c/common_ut.o 00:17:41.237 LINK idxd_user_ut 00:17:41.237 CC test/unit/lib/util/xor.c/xor_ut.o 00:17:41.494 LINK string_ut 00:17:41.494 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:17:41.752 LINK xor_ut 00:17:41.752 LINK pipe_ut 00:17:41.752 LINK common_ut 00:17:41.752 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:17:42.009 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:17:42.009 LINK nvme_rdma_ut 00:17:42.009 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:17:42.009 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:17:42.009 LINK ftl_l2p_ut 00:17:42.267 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:17:42.267 LINK ftl_bitmap_ut 00:17:42.267 LINK idxd_ut 00:17:42.267 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:17:42.524 LINK nvme_cuse_ut 00:17:42.524 LINK ftl_io_ut 00:17:42.524 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:17:42.524 LINK ftl_mempool_ut 00:17:42.524 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:17:42.781 LINK transport_ut 00:17:43.053 LINK ftl_mngt_ut 00:17:43.053 LINK rdma_ut 00:17:43.311 LINK vhost_ut 00:17:43.311 LINK ftl_band_ut 
00:17:43.876 LINK ftl_layout_upgrade_ut 00:17:44.133 LINK ftl_sb_ut 00:17:44.133 00:17:44.133 real 2m5.909s 00:17:44.133 user 10m17.691s 00:17:44.133 sys 2m12.535s 00:17:44.133 ************************************ 00:17:44.133 END TEST unittest_build 00:17:44.133 ************************************ 00:17:44.133 19:11:00 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:17:44.133 19:11:00 -- common/autotest_common.sh@10 -- $ set +x 00:17:44.502 19:11:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:44.502 19:11:00 -- pm/common@30 -- $ signal_monitor_resources TERM 00:17:44.502 19:11:00 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:17:44.502 19:11:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:44.502 19:11:00 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:44.502 19:11:00 -- pm/common@45 -- $ pid=2388 00:17:44.502 19:11:00 -- pm/common@52 -- $ sudo kill -TERM 2388 00:17:44.502 19:11:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:44.502 19:11:00 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:44.502 19:11:00 -- pm/common@45 -- $ pid=2387 00:17:44.502 19:11:00 -- pm/common@52 -- $ sudo kill -TERM 2387 00:17:44.502 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:17:44.502 19:11:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.502 19:11:00 -- nvmf/common.sh@7 -- # uname -s 00:17:44.502 19:11:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.502 19:11:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.502 19:11:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.502 19:11:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.502 19:11:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.502 19:11:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.502 19:11:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.502 19:11:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.502 19:11:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.502 19:11:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.502 19:11:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11134a65-2d3b-468f-b7da-f7be7c663939 00:17:44.502 19:11:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=11134a65-2d3b-468f-b7da-f7be7c663939 00:17:44.502 19:11:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.502 19:11:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.502 19:11:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:44.502 19:11:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.502 19:11:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.502 19:11:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.502 19:11:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.502 19:11:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.502 19:11:00 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:44.502 19:11:00 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:44.502 19:11:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:44.502 19:11:00 -- paths/export.sh@5 -- # export PATH 00:17:44.502 19:11:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:17:44.502 19:11:00 -- nvmf/common.sh@47 -- # : 0 00:17:44.502 19:11:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.502 19:11:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.502 19:11:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.502 19:11:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.502 19:11:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.502 19:11:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.502 19:11:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.502 19:11:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.502 19:11:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:44.502 19:11:00 -- spdk/autotest.sh@32 -- # uname -s 00:17:44.502 19:11:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:44.502 19:11:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:17:44.502 19:11:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:44.502 19:11:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:17:44.502 19:11:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:44.502 19:11:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:45.068 19:11:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:45.068 19:11:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:17:45.068 19:11:00 -- spdk/autotest.sh@48 -- # udevadm_pid=98370 00:17:45.068 19:11:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:45.068 19:11:00 -- pm/common@17 -- # local monitor 00:17:45.068 19:11:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:45.068 19:11:00 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=98371 00:17:45.068 19:11:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:45.068 19:11:00 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=98376 00:17:45.068 19:11:00 -- pm/common@26 -- # sleep 1 00:17:45.068 19:11:00 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:17:45.068 19:11:00 -- pm/common@21 -- # date +%s 00:17:45.068 19:11:00 -- pm/common@21 -- # date +%s 00:17:45.068 19:11:00 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713467460 00:17:45.068 19:11:00 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713467460 00:17:45.068 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:17:45.068 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:17:45.068 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713467460_collect-vmstat.pm.log 00:17:45.068 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713467460_collect-cpu-load.pm.log 00:17:46.002 19:11:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:17:46.002 19:11:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:17:46.002 19:11:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:46.002 19:11:01 -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 19:11:01 -- spdk/autotest.sh@59 -- # create_test_list 00:17:46.002 19:11:01 -- common/autotest_common.sh@734 -- # xtrace_disable 00:17:46.002 19:11:01 -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 19:11:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:17:46.002 19:11:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:17:46.002 19:11:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:17:46.002 19:11:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:17:46.002 19:11:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:17:46.002 19:11:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:17:46.002 19:11:01 -- common/autotest_common.sh@1441 -- # uname 00:17:46.002 19:11:01 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:17:46.002 19:11:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:17:46.002 19:11:01 -- common/autotest_common.sh@1461 -- # uname 00:17:46.002 19:11:01 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:17:46.002 19:11:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:17:46.002 19:11:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:17:46.002 19:11:01 -- spdk/autotest.sh@72 -- # hash lcov 00:17:46.002 19:11:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:17:46.002 19:11:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:17:46.002 --rc lcov_branch_coverage=1 00:17:46.002 --rc lcov_function_coverage=1 00:17:46.002 --rc genhtml_branch_coverage=1 00:17:46.002 --rc genhtml_function_coverage=1 00:17:46.002 --rc genhtml_legend=1 00:17:46.002 --rc geninfo_all_blocks=1 00:17:46.002 ' 00:17:46.002 19:11:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:17:46.002 --rc lcov_branch_coverage=1 00:17:46.002 --rc lcov_function_coverage=1 00:17:46.002 --rc genhtml_branch_coverage=1 00:17:46.002 --rc genhtml_function_coverage=1 00:17:46.002 --rc genhtml_legend=1 00:17:46.002 --rc geninfo_all_blocks=1 00:17:46.002 ' 00:17:46.002 19:11:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:17:46.002 --rc lcov_branch_coverage=1 00:17:46.002 --rc lcov_function_coverage=1 00:17:46.002 --rc genhtml_branch_coverage=1 00:17:46.002 --rc genhtml_function_coverage=1 00:17:46.002 --rc genhtml_legend=1 00:17:46.002 --rc geninfo_all_blocks=1 00:17:46.002 --no-external' 00:17:46.002 19:11:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:17:46.002 --rc lcov_branch_coverage=1 00:17:46.002 --rc lcov_function_coverage=1 00:17:46.002 --rc genhtml_branch_coverage=1 00:17:46.002 --rc genhtml_function_coverage=1 00:17:46.002 --rc genhtml_legend=1 00:17:46.002 --rc geninfo_all_blocks=1 00:17:46.002
--no-external' 00:17:46.002 19:11:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:17:46.260 lcov: LCOV version 1.15 00:17:46.260 19:11:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:17:48.800 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:17:48.800 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:17:48.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:17:48.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:17:48.801 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:17:48.801 
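These geninfo "no functions found" warnings (which continue below) are expected: each test/cpp_headers object only compile-checks a single public SPDK header, so its .gcno contains no function records and adds nothing to the coverage totals. A minimal sketch for keeping such stubs out of a final report with standard lcov filtering, assuming a tracefile named coverage.info (both the name and the glob are assumptions, not taken from this run):

# assumed tracefile name; the glob strips the header-only compile checks
lcov --remove coverage.info '*/test/cpp_headers/*' -o coverage.filtered.info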
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:17:48.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:17:48.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:17:49.060 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:17:49.060 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:17:49.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:18:35.744 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:18:35.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:18:35.744 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:18:35.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:18:35.744 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:18:35.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:18:41.055 19:11:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:18:41.055 19:11:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:41.055 19:11:56 -- common/autotest_common.sh@10 -- # set +x 00:18:41.055 19:11:56 -- spdk/autotest.sh@91 -- # rm -f 00:18:41.055 19:11:56 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:41.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:41.055 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:41.324 19:11:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:18:41.324 19:11:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:18:41.324 19:11:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:18:41.324 19:11:57 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:18:41.324 19:11:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:41.324 19:11:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:18:41.324 19:11:57 -- 
common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:41.324 19:11:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:41.324 19:11:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:41.324 19:11:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:18:41.324 19:11:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:18:41.324 19:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:18:41.324 19:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:18:41.324 19:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:18:41.324 19:11:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:41.324 No valid GPT data, bailing 00:18:41.324 19:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:41.324 19:11:57 -- scripts/common.sh@391 -- # pt= 00:18:41.324 19:11:57 -- scripts/common.sh@392 -- # return 1 00:18:41.324 19:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:18:41.324 1+0 records in 00:18:41.324 1+0 records out 00:18:41.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222658 s, 47.1 MB/s 00:18:41.324 19:11:57 -- spdk/autotest.sh@118 -- # sync 00:18:41.585 19:11:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:18:41.585 19:11:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:18:41.585 19:11:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:18:43.488 19:11:58 -- spdk/autotest.sh@124 -- # uname -s 00:18:43.488 19:11:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:18:43.488 19:11:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:18:43.488 19:11:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:43.488 19:11:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.488 19:11:58 -- common/autotest_common.sh@10 -- # set +x 00:18:43.488 ************************************ 00:18:43.488 START TEST setup.sh 00:18:43.488 ************************************ 00:18:43.488 19:11:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:18:43.488 * Looking for test storage... 00:18:43.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:18:43.488 19:11:59 -- setup/test-setup.sh@10 -- # uname -s 00:18:43.488 19:11:59 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:18:43.488 19:11:59 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:18:43.488 19:11:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:43.488 19:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.488 19:11:59 -- common/autotest_common.sh@10 -- # set +x 00:18:43.488 ************************************ 00:18:43.488 START TEST acl 00:18:43.488 ************************************ 00:18:43.488 19:11:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:18:43.488 * Looking for test storage... 
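The pre-cleanup trace above reduces to three checks per NVMe namespace: leave zoned devices alone, probe for an existing partition table, and zero the first MiB only when nothing claims the disk. A stand-alone sketch of that sequence, assuming the device name nvme0n1 and using blkid in place of SPDK's spdk-gpt.py helper:

#!/bin/bash
# hypothetical re-creation of the pre-cleanup steps traced above
dev=nvme0n1                                   # assumed device name
if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/$dev/queue/zoned) != none ]]; then
    echo "skipping $dev: zoned namespace"     # zoned devices are not wiped
elif [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]]; then
    # no partition table found, so the device is treated as free and its first MiB is zeroed
    dd if=/dev/zero of=/dev/$dev bs=1M count=1
fi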
00:18:43.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:18:43.488 19:11:59 -- setup/acl.sh@10 -- # get_zoned_devs 00:18:43.488 19:11:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:18:43.488 19:11:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:18:43.488 19:11:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:18:43.488 19:11:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:43.488 19:11:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:18:43.488 19:11:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:43.488 19:11:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:43.488 19:11:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:43.488 19:11:59 -- setup/acl.sh@12 -- # devs=() 00:18:43.488 19:11:59 -- setup/acl.sh@12 -- # declare -a devs 00:18:43.488 19:11:59 -- setup/acl.sh@13 -- # drivers=() 00:18:43.488 19:11:59 -- setup/acl.sh@13 -- # declare -A drivers 00:18:43.488 19:11:59 -- setup/acl.sh@51 -- # setup reset 00:18:43.488 19:11:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:43.488 19:11:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:44.054 19:11:59 -- setup/acl.sh@52 -- # collect_setup_devs 00:18:44.054 19:11:59 -- setup/acl.sh@16 -- # local dev driver 00:18:44.054 19:11:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:44.054 19:11:59 -- setup/acl.sh@15 -- # setup output status 00:18:44.054 19:11:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:44.054 19:11:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:18:44.313 19:12:00 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:18:44.313 19:12:00 -- setup/acl.sh@19 -- # continue 00:18:44.313 19:12:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:44.313 Hugepages 00:18:44.313 node hugesize free / total 00:18:44.313 19:12:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:18:44.313 19:12:00 -- setup/acl.sh@19 -- # continue 00:18:44.313 19:12:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:44.313 00:18:44.313 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:44.313 19:12:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:18:44.313 19:12:00 -- setup/acl.sh@19 -- # continue 00:18:44.313 19:12:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:44.571 19:12:00 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:18:44.571 19:12:00 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:18:44.571 19:12:00 -- setup/acl.sh@20 -- # continue 00:18:44.571 19:12:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:44.571 19:12:00 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:18:44.571 19:12:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:18:44.571 19:12:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:18:44.571 19:12:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:18:44.571 19:12:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:18:44.571 19:12:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:44.571 19:12:00 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:18:44.571 19:12:00 -- setup/acl.sh@54 -- # run_test denied denied 00:18:44.571 19:12:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:44.571 19:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.571 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:18:44.571 
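In the acl test above, collect_setup_devs walks the setup.sh status table line by line with read -r _ dev _ _ _ driver _, keeps rows whose second column looks like a PCI address and whose driver column is nvme, and stores them in the devs/drivers arrays; the denied and allowed cases that follow rerun setup.sh config with PCI_BLOCKED or PCI_ALLOWED set to that address and grep for the expected binding message. A rough sketch of the same flow, with the column order taken from the status header printed above and the script path abbreviated:

# condensed form of the device collection; field order follows
# "Type BDF Vendor Device NUMA Driver Device Block devices"
declare -a devs
declare -A drivers
while read -r _ bdf _ _ _ driver _; do
    [[ $bdf == *:*:*.* ]] || continue         # skip the Hugepages header and node rows
    [[ $driver == nvme ]] || continue         # only NVMe-bound controllers are of interest
    devs+=("$bdf")
    drivers["$bdf"]=$driver
done < <(scripts/setup.sh status)

# the denied/allowed cases then drive setup.sh with block/allow lists, e.g.:
PCI_BLOCKED="${devs[0]}" scripts/setup.sh config   # expect "Skipping denied controller at ..."
PCI_ALLOWED="${devs[0]}" scripts/setup.sh config   # expect "... nvme -> uio_pci_generic"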
************************************ 00:18:44.571 START TEST denied 00:18:44.571 ************************************ 00:18:44.571 19:12:00 -- common/autotest_common.sh@1111 -- # denied 00:18:44.571 19:12:00 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:18:44.571 19:12:00 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:18:44.571 19:12:00 -- setup/acl.sh@38 -- # setup output config 00:18:44.571 19:12:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:44.571 19:12:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:18:45.945 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:18:45.945 19:12:01 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:18:45.945 19:12:01 -- setup/acl.sh@28 -- # local dev driver 00:18:45.945 19:12:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:18:45.945 19:12:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:18:45.945 19:12:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:18:45.945 19:12:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:18:45.945 19:12:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:18:45.945 19:12:01 -- setup/acl.sh@41 -- # setup reset 00:18:45.945 19:12:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:45.945 19:12:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:46.510 ************************************ 00:18:46.510 END TEST denied 00:18:46.510 ************************************ 00:18:46.510 00:18:46.510 real 0m1.771s 00:18:46.510 user 0m0.455s 00:18:46.510 sys 0m1.356s 00:18:46.510 19:12:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.510 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:18:46.510 19:12:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:18:46.510 19:12:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:46.510 19:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.510 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:18:46.510 ************************************ 00:18:46.510 START TEST allowed 00:18:46.510 ************************************ 00:18:46.510 19:12:02 -- common/autotest_common.sh@1111 -- # allowed 00:18:46.510 19:12:02 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:18:46.510 19:12:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:18:46.510 19:12:02 -- setup/acl.sh@45 -- # setup output config 00:18:46.510 19:12:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:46.510 19:12:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:18:48.415 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:48.415 19:12:03 -- setup/acl.sh@47 -- # verify 00:18:48.415 19:12:03 -- setup/acl.sh@28 -- # local dev driver 00:18:48.415 19:12:03 -- setup/acl.sh@48 -- # setup reset 00:18:48.415 19:12:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:48.415 19:12:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:48.673 ************************************ 00:18:48.673 END TEST allowed 00:18:48.673 ************************************ 00:18:48.673 00:18:48.673 real 0m2.039s 00:18:48.673 user 0m0.471s 00:18:48.673 sys 0m1.534s 00:18:48.673 19:12:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.673 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.673 ************************************ 00:18:48.673 END TEST acl 00:18:48.673 
************************************ 00:18:48.673 00:18:48.673 real 0m5.296s 00:18:48.673 user 0m1.693s 00:18:48.673 sys 0m3.637s 00:18:48.673 19:12:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.673 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.673 19:12:04 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:18:48.673 19:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:48.673 19:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.673 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.673 ************************************ 00:18:48.673 START TEST hugepages 00:18:48.673 ************************************ 00:18:48.673 19:12:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:18:48.939 * Looking for test storage... 00:18:48.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:18:48.939 19:12:04 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:18:48.939 19:12:04 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:18:48.939 19:12:04 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:18:48.939 19:12:04 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:18:48.939 19:12:04 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:18:48.939 19:12:04 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:18:48.939 19:12:04 -- setup/common.sh@17 -- # local get=Hugepagesize 00:18:48.940 19:12:04 -- setup/common.sh@18 -- # local node= 00:18:48.940 19:12:04 -- setup/common.sh@19 -- # local var val 00:18:48.940 19:12:04 -- setup/common.sh@20 -- # local mem_f mem 00:18:48.940 19:12:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:48.940 19:12:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:48.940 19:12:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:48.940 19:12:04 -- setup/common.sh@28 -- # mapfile -t mem 00:18:48.940 19:12:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 2569644 kB' 'MemAvailable: 7299140 kB' 'Buffers: 38120 kB' 'Cached: 4803204 kB' 'SwapCached: 0 kB' 'Active: 1324484 kB' 'Inactive: 3744680 kB' 'Active(anon): 236860 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1087624 kB' 'Inactive(file): 3742872 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 105328 kB' 'Writeback: 200 kB' 'AnonPages: 246316 kB' 'Mapped: 78496 kB' 'Shmem: 2632 kB' 'KReclaimable: 220864 kB' 'Slab: 320132 kB' 'SReclaimable: 220864 kB' 'SUnreclaim: 99268 kB' 'KernelStack: 4916 kB' 'PageTables: 4668 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028400 kB' 'Committed_AS: 767148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 
-- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.940 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.940 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # continue 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # IFS=': ' 00:18:48.941 19:12:04 -- setup/common.sh@31 -- # read -r var val _ 00:18:48.941 19:12:04 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:48.941 19:12:04 -- setup/common.sh@33 -- # echo 2048 00:18:48.941 19:12:04 -- setup/common.sh@33 -- # return 0 00:18:48.941 19:12:04 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:18:48.941 19:12:04 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:18:48.941 19:12:04 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:18:48.941 19:12:04 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:18:48.941 19:12:04 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:18:48.941 19:12:04 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:18:48.941 19:12:04 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:18:48.941 19:12:04 -- setup/hugepages.sh@207 -- # get_nodes 00:18:48.941 19:12:04 -- setup/hugepages.sh@27 -- # local node 00:18:48.941 19:12:04 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:18:48.941 19:12:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:18:48.941 19:12:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:48.941 19:12:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:48.941 19:12:04 -- setup/hugepages.sh@208 -- # clear_hp 00:18:48.941 19:12:04 -- setup/hugepages.sh@37 -- # local node hp 00:18:48.941 19:12:04 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:18:48.941 19:12:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:48.941 19:12:04 -- setup/hugepages.sh@41 -- # echo 0 00:18:48.941 19:12:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:48.941 19:12:04 -- setup/hugepages.sh@41 -- # echo 0 00:18:48.941 19:12:04 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:18:48.941 19:12:04 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:18:48.941 19:12:04 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:18:48.941 19:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:48.941 19:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.941 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.941 ************************************ 00:18:48.941 START TEST default_setup 00:18:48.941 ************************************ 00:18:48.941 19:12:04 -- common/autotest_common.sh@1111 -- # default_setup 00:18:48.941 19:12:04 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:18:48.941 19:12:04 -- setup/hugepages.sh@49 -- # local size=2097152 00:18:48.941 19:12:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:18:48.941 19:12:04 -- setup/hugepages.sh@51 -- # shift 00:18:48.941 19:12:04 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:18:48.941 19:12:04 -- setup/hugepages.sh@52 -- # local node_ids 00:18:48.941 19:12:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:48.941 19:12:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:48.941 19:12:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:18:48.941 19:12:04 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:48.941 19:12:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:48.941 19:12:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:48.941 19:12:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:48.941 19:12:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:48.941 19:12:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:48.941 19:12:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:18:48.941 19:12:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:48.941 19:12:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:18:48.941 19:12:04 -- setup/hugepages.sh@73 -- # return 0 00:18:48.941 19:12:04 -- setup/hugepages.sh@137 -- # setup output 00:18:48.941 19:12:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:48.941 19:12:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:49.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:49.507 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:50.076 19:12:05 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:18:50.076 19:12:05 -- setup/hugepages.sh@89 -- # local node 00:18:50.076 19:12:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:50.076 19:12:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:50.076 19:12:05 
-- setup/hugepages.sh@92 -- # local surp 00:18:50.076 19:12:05 -- setup/hugepages.sh@93 -- # local resv 00:18:50.076 19:12:05 -- setup/hugepages.sh@94 -- # local anon 00:18:50.076 19:12:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:50.076 19:12:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:50.076 19:12:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:50.076 19:12:05 -- setup/common.sh@18 -- # local node= 00:18:50.076 19:12:05 -- setup/common.sh@19 -- # local var val 00:18:50.076 19:12:05 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.076 19:12:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.076 19:12:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.076 19:12:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.076 19:12:05 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.076 19:12:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4823252 kB' 'MemAvailable: 9445916 kB' 'Buffers: 38120 kB' 'Cached: 4698912 kB' 'SwapCached: 0 kB' 'Active: 1322036 kB' 'Inactive: 3593700 kB' 'Active(anon): 187724 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1134312 kB' 'Inactive(file): 3591888 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 105172 kB' 'Writeback: 0 kB' 'AnonPages: 197272 kB' 'Mapped: 78040 kB' 'Shmem: 2632 kB' 'KReclaimable: 218328 kB' 'Slab: 318528 kB' 'SReclaimable: 218328 kB' 'SUnreclaim: 100200 kB' 'KernelStack: 4832 kB' 'PageTables: 4180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 722232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.076 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.076 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.077 19:12:05 -- setup/common.sh@33 -- # echo 0 00:18:50.077 19:12:05 -- setup/common.sh@33 -- # return 0 00:18:50.077 19:12:05 -- setup/hugepages.sh@97 -- # anon=0 00:18:50.077 19:12:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:50.077 19:12:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:50.077 19:12:05 -- setup/common.sh@18 -- # local node= 00:18:50.077 19:12:05 -- setup/common.sh@19 -- # local var val 00:18:50.077 19:12:05 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.077 19:12:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.077 19:12:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.077 19:12:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.077 19:12:05 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.077 19:12:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4821236 kB' 'MemAvailable: 9443900 kB' 'Buffers: 38120 kB' 'Cached: 4698912 kB' 'SwapCached: 0 kB' 'Active: 1327236 kB' 'Inactive: 3590580 kB' 'Active(anon): 189804 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1137432 kB' 'Inactive(file): 3588768 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 105172 kB' 'Writeback: 0 kB' 'AnonPages: 199352 kB' 'Mapped: 78040 
kB' 'Shmem: 2632 kB' 'KReclaimable: 218328 kB' 'Slab: 318528 kB' 'SReclaimable: 218328 kB' 'SUnreclaim: 100200 kB' 'KernelStack: 4832 kB' 'PageTables: 4180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 722232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.077 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.077 19:12:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 
00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- 
setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 
19:12:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.078 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.078 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.079 19:12:05 -- setup/common.sh@33 -- # echo 0 
00:18:50.079 19:12:05 -- setup/common.sh@33 -- # return 0 00:18:50.079 19:12:05 -- setup/hugepages.sh@99 -- # surp=0 00:18:50.079 19:12:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:50.079 19:12:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:50.079 19:12:05 -- setup/common.sh@18 -- # local node= 00:18:50.079 19:12:05 -- setup/common.sh@19 -- # local var val 00:18:50.079 19:12:05 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.079 19:12:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.079 19:12:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.079 19:12:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.079 19:12:05 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.079 19:12:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4820716 kB' 'MemAvailable: 9443056 kB' 'Buffers: 38120 kB' 'Cached: 4698912 kB' 'SwapCached: 0 kB' 'Active: 1331264 kB' 'Inactive: 3587716 kB' 'Active(anon): 191232 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1140032 kB' 'Inactive(file): 3585908 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 105172 kB' 'Writeback: 0 kB' 'AnonPages: 200600 kB' 'Mapped: 78020 kB' 'Shmem: 2632 kB' 'KReclaimable: 218264 kB' 'Slab: 318456 kB' 'SReclaimable: 218264 kB' 'SUnreclaim: 100192 kB' 'KernelStack: 4796 kB' 'PageTables: 4208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 727716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 
-- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 
-- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.079 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.079 19:12:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.080 19:12:05 -- setup/common.sh@33 -- # echo 0 00:18:50.080 19:12:05 -- setup/common.sh@33 -- # return 0 00:18:50.080 nr_hugepages=1024 00:18:50.080 resv_hugepages=0 00:18:50.080 surplus_hugepages=0 00:18:50.080 anon_hugepages=0 00:18:50.080 19:12:05 -- setup/hugepages.sh@100 -- # resv=0 00:18:50.080 19:12:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:50.080 19:12:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:50.080 19:12:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:50.080 19:12:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:50.080 19:12:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:50.080 19:12:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:50.080 19:12:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:50.080 19:12:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:50.080 19:12:05 -- setup/common.sh@18 -- # local node= 00:18:50.080 19:12:05 -- setup/common.sh@19 -- # local var val 00:18:50.080 19:12:05 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.080 19:12:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.080 19:12:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.080 19:12:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.080 19:12:05 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.080 19:12:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4819180 kB' 'MemAvailable: 9441780 kB' 'Buffers: 38120 kB' 'Cached: 4698912 kB' 'SwapCached: 0 kB' 'Active: 1335720 kB' 'Inactive: 3584860 kB' 'Active(anon): 192568 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1143152 kB' 'Inactive(file): 3583048 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 105172 kB' 'Writeback: 0 kB' 'AnonPages: 201792 kB' 'Mapped: 78020 kB' 'Shmem: 2632 kB' 'KReclaimable: 218264 kB' 'Slab: 318456 kB' 'SReclaimable: 218264 kB' 'SUnreclaim: 100192 kB' 'KernelStack: 4848 kB' 'PageTables: 4180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 731600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.080 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.080 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val 
_ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 
-- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.081 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.081 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.081 19:12:05 -- setup/common.sh@33 -- # echo 1024 00:18:50.081 19:12:05 -- setup/common.sh@33 -- # return 0 00:18:50.081 19:12:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:50.082 19:12:05 -- setup/hugepages.sh@112 -- # get_nodes 00:18:50.082 19:12:05 -- setup/hugepages.sh@27 -- # local node 00:18:50.082 19:12:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:50.082 19:12:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:50.082 19:12:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:50.082 19:12:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:50.082 19:12:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:50.082 19:12:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:50.082 19:12:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:50.082 19:12:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:50.082 19:12:05 -- setup/common.sh@18 -- # local node=0 00:18:50.082 19:12:05 -- setup/common.sh@19 -- # local var val 00:18:50.082 19:12:05 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.082 19:12:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.082 19:12:05 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:18:50.082 19:12:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:50.082 19:12:05 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.082 19:12:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4817376 kB' 'MemUsed: 7433728 kB' 'Active: 1339756 kB' 'Inactive: 3582000 kB' 'Active(anon): 194004 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1145752 kB' 'Inactive(file): 3580188 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 105172 kB' 'Writeback: 0 kB' 'FilePages: 4737032 kB' 'Mapped: 78020 kB' 'AnonPages: 203368 kB' 'Shmem: 2632 kB' 'KernelStack: 4916 kB' 'PageTables: 4184 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 218264 kB' 'Slab: 318456 kB' 'SReclaimable: 218264 kB' 'SUnreclaim: 100192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read 
-r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.082 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.082 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.083 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.083 19:12:05 -- setup/common.sh@32 -- # continue 00:18:50.083 19:12:05 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.083 19:12:05 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.083 19:12:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.083 19:12:05 -- 
setup/common.sh@33 -- # echo 0 00:18:50.083 19:12:05 -- setup/common.sh@33 -- # return 0 00:18:50.083 19:12:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:50.083 19:12:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:50.083 19:12:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:50.083 19:12:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:50.083 19:12:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:50.083 node0=1024 expecting 1024 00:18:50.083 19:12:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:50.083 00:18:50.083 real 0m1.206s 00:18:50.083 user 0m0.305s 00:18:50.083 sys 0m0.817s 00:18:50.083 19:12:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:50.083 19:12:05 -- common/autotest_common.sh@10 -- # set +x 00:18:50.083 ************************************ 00:18:50.083 END TEST default_setup 00:18:50.083 ************************************ 00:18:50.083 19:12:05 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:18:50.083 19:12:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:50.083 19:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:50.083 19:12:05 -- common/autotest_common.sh@10 -- # set +x 00:18:50.342 ************************************ 00:18:50.342 START TEST per_node_1G_alloc 00:18:50.342 ************************************ 00:18:50.342 19:12:06 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:18:50.342 19:12:06 -- setup/hugepages.sh@143 -- # local IFS=, 00:18:50.342 19:12:06 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:18:50.342 19:12:06 -- setup/hugepages.sh@49 -- # local size=1048576 00:18:50.342 19:12:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:18:50.342 19:12:06 -- setup/hugepages.sh@51 -- # shift 00:18:50.342 19:12:06 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:18:50.342 19:12:06 -- setup/hugepages.sh@52 -- # local node_ids 00:18:50.342 19:12:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:50.342 19:12:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:18:50.342 19:12:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:18:50.342 19:12:06 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:50.342 19:12:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:50.342 19:12:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:18:50.342 19:12:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:50.342 19:12:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:50.342 19:12:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:50.342 19:12:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:18:50.342 19:12:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:50.342 19:12:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:18:50.342 19:12:06 -- setup/hugepages.sh@73 -- # return 0 00:18:50.342 19:12:06 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:18:50.342 19:12:06 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:18:50.342 19:12:06 -- setup/hugepages.sh@146 -- # setup output 00:18:50.342 19:12:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:50.342 19:12:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:50.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:50.601 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:50.862 19:12:06 -- 
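Every get_meminfo call in the trace above does the same thing: it slurps the whole meminfo file, then reads it back line by line with IFS=': ' read -r var val _, skipping keys that do not match (the long runs of continue) until the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total) is found and its value is echoed. A minimal stand-alone sketch of that lookup pattern follows; the helper name and the simplified per-node handling are illustrative, not the real setup/common.sh code.

    # Hypothetical helper mirroring the lookup pattern exercised in the trace above.
    # Pass /sys/devices/system/node/nodeN/meminfo as $2 for per-node values.
    lookup_meminfo() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        # Per-node meminfo files prefix every line with "Node <id> "; strip that
        # prefix so the same "Key: value" split works for both file formats.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the key we want, keep scanning
            echo "$val"                        # e.g. 1024 for HugePages_Total on this host
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

With anon=0, surp=0 and resv=0 from those lookups, the hugepages.sh@107 check above reduces to (( 1024 == 1024 + 0 + 0 )), and the per-node pass prints node0=1024 expecting 1024, so default_setup finishes cleanly.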
setup/hugepages.sh@147 -- # nr_hugepages=512 00:18:50.862 19:12:06 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:18:50.862 19:12:06 -- setup/hugepages.sh@89 -- # local node 00:18:50.862 19:12:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:50.862 19:12:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:50.862 19:12:06 -- setup/hugepages.sh@92 -- # local surp 00:18:50.862 19:12:06 -- setup/hugepages.sh@93 -- # local resv 00:18:50.862 19:12:06 -- setup/hugepages.sh@94 -- # local anon 00:18:50.862 19:12:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:50.862 19:12:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:50.862 19:12:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:50.862 19:12:06 -- setup/common.sh@18 -- # local node= 00:18:50.862 19:12:06 -- setup/common.sh@19 -- # local var val 00:18:50.862 19:12:06 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.862 19:12:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.862 19:12:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.862 19:12:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.862 19:12:06 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.862 19:12:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5821980 kB' 'MemAvailable: 10463264 kB' 'Buffers: 38120 kB' 'Cached: 4718208 kB' 'SwapCached: 0 kB' 'Active: 1513736 kB' 'Inactive: 3450672 kB' 'Active(anon): 217260 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296476 kB' 'Inactive(file): 3448860 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 124472 kB' 'Writeback: 0 kB' 'AnonPages: 226448 kB' 'Mapped: 78112 kB' 'Shmem: 2632 kB' 'KReclaimable: 218316 kB' 'Slab: 318028 kB' 'SReclaimable: 218316 kB' 'SUnreclaim: 99712 kB' 'KernelStack: 4792 kB' 'PageTables: 4024 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 758072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.862 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.862 19:12:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:50.863 19:12:06 -- setup/common.sh@33 -- # echo 0 00:18:50.863 19:12:06 -- setup/common.sh@33 -- # return 0 00:18:50.863 19:12:06 -- setup/hugepages.sh@97 -- # anon=0 00:18:50.863 19:12:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:50.863 19:12:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:50.863 19:12:06 -- setup/common.sh@18 -- # local node= 00:18:50.863 19:12:06 -- setup/common.sh@19 -- # local var val 00:18:50.863 19:12:06 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.863 19:12:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.863 19:12:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.863 19:12:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.863 19:12:06 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.863 19:12:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5806860 kB' 'MemAvailable: 10463728 kB' 'Buffers: 38120 kB' 'Cached: 
4732564 kB' 'SwapCached: 0 kB' 'Active: 1513996 kB' 'Inactive: 3465232 kB' 'Active(anon): 217520 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296476 kB' 'Inactive(file): 3463420 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 139032 kB' 'Writeback: 0 kB' 'AnonPages: 226708 kB' 'Mapped: 78112 kB' 'Shmem: 2632 kB' 'KReclaimable: 218836 kB' 'Slab: 318548 kB' 'SReclaimable: 218836 kB' 'SUnreclaim: 99712 kB' 'KernelStack: 4792 kB' 'PageTables: 4024 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 758072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 
19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.863 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.863 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 
19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:50.864 19:12:06 -- setup/common.sh@33 -- # echo 0 00:18:50.864 19:12:06 -- setup/common.sh@33 -- # return 0 00:18:50.864 19:12:06 -- setup/hugepages.sh@99 -- # surp=0 00:18:50.864 19:12:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:50.864 19:12:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:50.864 19:12:06 -- setup/common.sh@18 -- # local node= 00:18:50.864 19:12:06 -- setup/common.sh@19 -- # local var val 00:18:50.864 19:12:06 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.864 19:12:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.864 19:12:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.864 19:12:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.864 19:12:06 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.864 19:12:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5788212 kB' 'MemAvailable: 10463800 kB' 'Buffers: 38120 kB' 'Cached: 4750800 kB' 'SwapCached: 0 kB' 'Active: 1514228 kB' 'Inactive: 3483432 kB' 'Active(anon): 217752 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296476 kB' 'Inactive(file): 3481620 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 157232 kB' 'Writeback: 0 kB' 'AnonPages: 226940 kB' 'Mapped: 78112 kB' 'Shmem: 2632 kB' 'KReclaimable: 219356 kB' 'Slab: 319068 kB' 'SReclaimable: 219356 kB' 'SUnreclaim: 99712 kB' 'KernelStack: 4776 kB' 'PageTables: 3996 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 756876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.864 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.864 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.864 19:12:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # 
read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- 
setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.865 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.865 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:50.866 19:12:06 -- setup/common.sh@33 -- # echo 0 00:18:50.866 19:12:06 -- setup/common.sh@33 -- # return 0 00:18:50.866 nr_hugepages=512 00:18:50.866 resv_hugepages=0 00:18:50.866 surplus_hugepages=0 00:18:50.866 anon_hugepages=0 00:18:50.866 19:12:06 -- setup/hugepages.sh@100 -- # resv=0 00:18:50.866 19:12:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:18:50.866 19:12:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:50.866 19:12:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:50.866 19:12:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:50.866 19:12:06 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:18:50.866 19:12:06 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:18:50.866 19:12:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:50.866 19:12:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:50.866 19:12:06 -- setup/common.sh@18 -- # local node= 00:18:50.866 19:12:06 -- setup/common.sh@19 -- # local var val 00:18:50.866 19:12:06 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.866 19:12:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.866 19:12:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:50.866 19:12:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:50.866 19:12:06 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.866 19:12:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5788440 kB' 'MemAvailable: 10464028 kB' 'Buffers: 38120 kB' 'Cached: 4750800 kB' 'SwapCached: 0 kB' 'Active: 1513844 kB' 'Inactive: 3483432 kB' 'Active(anon): 217368 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296476 kB' 'Inactive(file): 3481620 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 157232 kB' 'Writeback: 0 kB' 'AnonPages: 226512 kB' 'Mapped: 78112 kB' 'Shmem: 2632 kB' 'KReclaimable: 219356 kB' 
'Slab: 319068 kB' 'SReclaimable: 219356 kB' 'SUnreclaim: 99712 kB' 'KernelStack: 4796 kB' 'PageTables: 3908 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 751016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.866 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.866 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 
-- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # continue 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:50.867 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:50.867 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:50.867 19:12:06 -- setup/common.sh@33 -- # echo 512 00:18:50.867 19:12:06 -- setup/common.sh@33 -- # return 0 00:18:50.867 19:12:06 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:18:50.867 19:12:06 -- setup/hugepages.sh@112 -- # get_nodes 00:18:50.867 19:12:06 -- setup/hugepages.sh@27 -- # local node 00:18:50.867 19:12:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:50.867 19:12:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:50.867 19:12:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:50.867 19:12:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:50.867 19:12:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:50.867 19:12:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:50.867 19:12:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:50.867 19:12:06 -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:18:50.867 19:12:06 -- setup/common.sh@18 -- # local node=0 00:18:50.867 19:12:06 -- setup/common.sh@19 -- # local var val 00:18:50.867 19:12:06 -- setup/common.sh@20 -- # local mem_f mem 00:18:50.867 19:12:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:50.867 19:12:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:50.867 19:12:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:50.867 19:12:06 -- setup/common.sh@28 -- # mapfile -t mem 00:18:50.867 19:12:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5770468 kB' 'MemUsed: 6480636 kB' 'Active: 1513544 kB' 'Inactive: 3501112 kB' 'Active(anon): 217068 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296476 kB' 'Inactive(file): 3499300 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 175172 kB' 'Writeback: 0 kB' 'FilePages: 4806768 kB' 'Mapped: 78084 kB' 'AnonPages: 226612 kB' 'Shmem: 2632 kB' 'KernelStack: 4856 kB' 'PageTables: 4064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219616 kB' 'Slab: 319376 kB' 'SReclaimable: 219616 kB' 'SUnreclaim: 99760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.126 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.126 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 
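The loop being traced above is just a key/value scan of the node's meminfo file: map the file into an array, strip the leading "Node 0 " prefix, then split each line on ': ' and echo the value once the requested key matches. A minimal stand-alone sketch of that lookup pattern, using a made-up helper name rather than the repo's setup/common.sh:

#!/usr/bin/env bash
# Minimal sketch of the per-node meminfo lookup the xtrace above steps through.
# get_node_meminfo is a hypothetical name; the traced code is the repo's setup/common.sh.
shopt -s extglob                            # for the +([0-9]) prefix-strip pattern

get_node_meminfo() {
    local get=$1 node=$2 var val line
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # drop the leading "Node 0 " on every line
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # not the requested key, keep scanning
        echo "$val"                         # e.g. 0 for HugePages_Surp
        return 0
    done
    return 1
}

get_node_meminfo HugePages_Surp 0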
00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # continue 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.127 19:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.127 19:12:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.127 19:12:06 -- setup/common.sh@33 -- # echo 0 00:18:51.127 19:12:06 -- setup/common.sh@33 -- # return 0 00:18:51.127 node0=512 expecting 512 00:18:51.127 ************************************ 00:18:51.127 END TEST per_node_1G_alloc 00:18:51.127 ************************************ 00:18:51.127 19:12:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:51.127 19:12:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:51.127 19:12:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:51.127 19:12:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:18:51.127 19:12:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:18:51.127 00:18:51.127 real 0m0.758s 00:18:51.127 user 0m0.236s 00:18:51.127 sys 0m0.501s 00:18:51.127 19:12:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:51.127 19:12:06 -- common/autotest_common.sh@10 -- # set +x 00:18:51.127 19:12:06 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:18:51.127 19:12:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:51.127 19:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.127 19:12:06 -- common/autotest_common.sh@10 -- # set +x 00:18:51.127 ************************************ 00:18:51.127 START TEST even_2G_alloc 00:18:51.127 ************************************ 00:18:51.127 19:12:06 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:18:51.127 19:12:06 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:18:51.127 19:12:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:18:51.127 19:12:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:51.127 19:12:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:51.127 19:12:06 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:51.127 19:12:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:51.127 19:12:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:51.127 19:12:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:51.127 19:12:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:51.127 19:12:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:51.127 19:12:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:18:51.127 19:12:06 -- setup/hugepages.sh@83 -- # : 0 00:18:51.127 19:12:06 -- setup/hugepages.sh@84 -- # : 0 00:18:51.127 19:12:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:51.127 19:12:06 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:18:51.127 19:12:06 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:18:51.127 19:12:06 -- setup/hugepages.sh@153 -- # setup output 00:18:51.127 19:12:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:51.127 19:12:06 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:51.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:51.385 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:51.968 19:12:07 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:18:51.968 19:12:07 -- setup/hugepages.sh@89 -- # local node 00:18:51.968 19:12:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:51.968 19:12:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:51.968 19:12:07 -- setup/hugepages.sh@92 -- # local surp 00:18:51.968 19:12:07 -- setup/hugepages.sh@93 -- # local resv 00:18:51.968 19:12:07 -- setup/hugepages.sh@94 -- # local anon 00:18:51.968 19:12:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:51.968 19:12:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:51.968 19:12:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:51.968 19:12:07 -- setup/common.sh@18 -- # local node= 00:18:51.968 19:12:07 -- setup/common.sh@19 -- # local var val 00:18:51.968 19:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:18:51.968 19:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:51.968 19:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:51.968 19:12:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:51.968 19:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:18:51.968 19:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:51.968 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.968 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.968 19:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4718216 kB' 'MemAvailable: 9447404 kB' 'Buffers: 38132 kB' 'Cached: 4803036 kB' 'SwapCached: 0 kB' 'Active: 1481936 kB' 'Inactive: 3535424 kB' 'Active(anon): 185340 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296596 kB' 'Inactive(file): 3533612 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 195152 kB' 'Mapped: 129792 kB' 'Shmem: 2632 kB' 'KReclaimable: 220844 kB' 'Slab: 320604 kB' 'SReclaimable: 220844 kB' 'SUnreclaim: 99760 kB' 'KernelStack: 4848 kB' 'PageTables: 4380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 779812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:51.968 19:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.968 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.968 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.968 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- 
setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ 
SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var 
val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.969 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.969 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:51.970 19:12:07 -- setup/common.sh@33 -- # echo 0 00:18:51.970 19:12:07 -- setup/common.sh@33 -- # return 0 00:18:51.970 19:12:07 -- setup/hugepages.sh@97 -- # anon=0 00:18:51.970 19:12:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:51.970 19:12:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:51.970 19:12:07 -- setup/common.sh@18 -- # local node= 00:18:51.970 19:12:07 -- setup/common.sh@19 -- # local var val 00:18:51.970 19:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:18:51.970 19:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:51.970 19:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:51.970 19:12:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:51.970 19:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:18:51.970 19:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:51.970 19:12:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4718548 kB' 'MemAvailable: 9447736 kB' 'Buffers: 38132 kB' 'Cached: 4803036 kB' 'SwapCached: 0 kB' 'Active: 1481884 kB' 'Inactive: 3535424 kB' 'Active(anon): 185288 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296596 kB' 'Inactive(file): 3533612 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 194732 kB' 'Mapped: 129784 kB' 'Shmem: 2632 kB' 'KReclaimable: 220844 kB' 'Slab: 320348 kB' 'SReclaimable: 220844 kB' 'SUnreclaim: 99504 kB' 'KernelStack: 4832 kB' 'PageTables: 4128 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 779812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 
-- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 
19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.970 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.970 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 
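For reference, the even_2G_alloc sizing traced a little earlier resolves to 1024 hugepages: get_test_nr_hugepages is handed 2097152, the meminfo snapshots report a 2048 kB hugepage size, and this VM exposes a single node, so the whole allocation lands on node0. A quick sanity check of that arithmetic, with assumed variable names rather than the hugepages.sh internals:

#!/usr/bin/env bash
# Reproduces the numbers the trace reports for even_2G_alloc; names are made up.
size_kb=2097152                                  # argument passed to get_test_nr_hugepages
hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" in the snapshots
nodes=1                                          # single NUMA node (node0) on this VM

nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1024, matching nr_hugepages=1024 above
per_node=$(( nr_hugepages / nodes ))             # all 1024 pages assigned to node0
echo "nr_hugepages=$nr_hugepages node0=$per_node"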
00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.971 19:12:07 -- setup/common.sh@33 -- # echo 0 00:18:51.971 19:12:07 -- setup/common.sh@33 -- # return 0 00:18:51.971 19:12:07 -- setup/hugepages.sh@99 -- # surp=0 00:18:51.971 19:12:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:51.971 19:12:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:51.971 19:12:07 -- setup/common.sh@18 -- # local node= 00:18:51.971 19:12:07 -- setup/common.sh@19 -- # local var val 00:18:51.971 19:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:18:51.971 19:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:51.971 19:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:51.971 19:12:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:51.971 19:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:18:51.971 19:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4719080 kB' 'MemAvailable: 9448272 kB' 'Buffers: 38132 kB' 'Cached: 4803036 kB' 'SwapCached: 0 kB' 'Active: 1481528 kB' 'Inactive: 3535352 kB' 'Active(anon): 184860 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296668 kB' 'Inactive(file): 3533540 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 194372 kB' 'Mapped: 129772 kB' 'Shmem: 2632 kB' 'KReclaimable: 220848 kB' 'Slab: 320312 kB' 'SReclaimable: 220848 kB' 'SUnreclaim: 99464 kB' 'KernelStack: 4756 kB' 'PageTables: 4176 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 785768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 
00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.971 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.971 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
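The HugePages_Rsvd scan running here is one of several counter lookups this verify pass makes (anonymous, surplus, reserved, total); the snapshots above already show HugePages_Total: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, and the comparisons that close out the pass appear just below. A rough, self-contained sketch of that bookkeeping, reading /proc/meminfo directly through a made-up meminfo_val helper instead of the traced get_meminfo:

#!/usr/bin/env bash
# Rough sketch of the hugepage accounting this verify pass performs; helper name
# and exact check shape are assumptions, values in comments are from this run.
meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

expected=1024
total=$(meminfo_val HugePages_Total)    # 1024 in the snapshots above
resv=$(meminfo_val HugePages_Rsvd)      # 0
surp=$(meminfo_val HugePages_Surp)      # 0
anon=$(meminfo_val AnonHugePages)       # 0 (kB)

# Mirrors the two arithmetic tests visible at hugepages.sh@107 and @109 later in this trace.
if (( expected == total + surp + resv )) && (( expected == total )); then
    echo "nr_hugepages=$total"
    echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi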
00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.972 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.972 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:51.972 19:12:07 -- setup/common.sh@33 -- # echo 0 00:18:51.972 19:12:07 -- setup/common.sh@33 -- # return 0 00:18:51.972 nr_hugepages=1024 00:18:51.972 resv_hugepages=0 00:18:51.972 surplus_hugepages=0 00:18:51.972 anon_hugepages=0 00:18:51.972 19:12:07 -- setup/hugepages.sh@100 -- # resv=0 00:18:51.972 19:12:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:51.972 19:12:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:51.973 19:12:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:51.973 19:12:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:51.973 19:12:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:51.973 19:12:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:51.973 19:12:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:51.973 19:12:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:51.973 19:12:07 -- setup/common.sh@18 -- # local node= 00:18:51.973 19:12:07 -- setup/common.sh@19 -- # local var val 00:18:51.973 19:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:18:51.973 19:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:51.973 19:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:51.973 19:12:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:51.973 19:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:18:51.973 19:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4719276 kB' 'MemAvailable: 9448468 kB' 'Buffers: 38132 kB' 'Cached: 4803036 kB' 'SwapCached: 0 kB' 'Active: 1481656 kB' 'Inactive: 3535352 kB' 'Active(anon): 184988 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1296668 kB' 'Inactive(file): 3533540 kB' 'Unevictable: 18536 kB' 
'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 194560 kB' 'Mapped: 129796 kB' 'Shmem: 2632 kB' 'KReclaimable: 220848 kB' 'Slab: 320324 kB' 'SReclaimable: 220848 kB' 'SUnreclaim: 99476 kB' 'KernelStack: 4740 kB' 'PageTables: 4060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 790580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 
-- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.973 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.973 19:12:07 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.973 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 
19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:51.974 19:12:07 -- setup/common.sh@33 -- # echo 1024 00:18:51.974 19:12:07 -- setup/common.sh@33 -- # return 0 00:18:51.974 19:12:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:51.974 19:12:07 -- setup/hugepages.sh@112 -- # get_nodes 00:18:51.974 19:12:07 -- setup/hugepages.sh@27 -- # local node 00:18:51.974 19:12:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:51.974 19:12:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:51.974 19:12:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:51.974 19:12:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:51.974 19:12:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:51.974 19:12:07 -- 
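The loop traced above is a plain field lookup over a meminfo-style file: each "name: value" pair is read, everything is skipped until the requested field (here HugePages_Total) is reached, its value is echoed and the function returns. A minimal stand-alone sketch of that pattern, written for illustration rather than copied from setup/common.sh (the helper name meminfo_value and its argument order are assumptions):

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern visible in the trace, not the
# SPDK get_meminfo itself: use /proc/meminfo for system-wide queries, or the
# per-node file when a node number is supplied, then print one field's value.
meminfo_value() {
    local key=$1 node=$2 file=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node N " prefix; strip it before matching.
    sed 's/^Node [0-9]* //' "$file" | while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; break; }
    done
}

meminfo_value HugePages_Total     # e.g. 1024
meminfo_value HugePages_Surp 0    # per-node query, e.g. 0

With no node argument it reads /proc/meminfo; with a node number it switches to the per-node file, the same branch visible at setup/common.sh@23-24 in the trace.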
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:51.974 19:12:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:51.974 19:12:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:51.974 19:12:07 -- setup/common.sh@18 -- # local node=0 00:18:51.974 19:12:07 -- setup/common.sh@19 -- # local var val 00:18:51.974 19:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:18:51.974 19:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:51.974 19:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:51.974 19:12:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:51.974 19:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:18:51.974 19:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4719544 kB' 'MemUsed: 7531560 kB' 'Active: 1481740 kB' 'Inactive: 3535252 kB' 'Active(anon): 184968 kB' 'Inactive(anon): 1816 kB' 'Active(file): 1296772 kB' 'Inactive(file): 3533436 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 209488 kB' 'Writeback: 0 kB' 'FilePages: 4841044 kB' 'Mapped: 129860 kB' 'AnonPages: 194408 kB' 'Shmem: 2636 kB' 'KernelStack: 4836 kB' 'PageTables: 4224 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220856 kB' 'Slab: 320528 kB' 'SReclaimable: 220856 kB' 'SUnreclaim: 99672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:51.974 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:51.974 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.233 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.233 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.233 19:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.233 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.233 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 
19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # continue 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:18:52.234 19:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:18:52.234 19:12:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:52.234 19:12:07 -- setup/common.sh@33 -- # echo 0 00:18:52.234 19:12:07 -- setup/common.sh@33 -- # return 0 00:18:52.234 19:12:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:52.234 19:12:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:52.234 19:12:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:52.234 19:12:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:52.234 19:12:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:52.234 node0=1024 expecting 1024 00:18:52.234 19:12:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:52.234 00:18:52.234 real 0m0.982s 00:18:52.234 user 0m0.262s 00:18:52.234 sys 0m0.686s 00:18:52.234 19:12:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.234 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:18:52.234 ************************************ 00:18:52.234 END TEST even_2G_alloc 00:18:52.234 ************************************ 00:18:52.234 19:12:07 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:18:52.234 19:12:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:52.234 19:12:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.234 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:18:52.234 ************************************ 00:18:52.234 START TEST odd_alloc 00:18:52.234 ************************************ 00:18:52.234 19:12:08 -- common/autotest_common.sh@1111 -- # odd_alloc 00:18:52.234 19:12:08 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:18:52.234 19:12:08 -- setup/hugepages.sh@49 -- # local size=2098176 00:18:52.234 19:12:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:52.234 19:12:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:52.234 19:12:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:18:52.234 19:12:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:52.234 19:12:08 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:52.234 19:12:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:52.234 19:12:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:18:52.234 19:12:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:52.234 19:12:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:52.234 19:12:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:52.234 19:12:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:52.234 19:12:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:18:52.234 19:12:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:52.234 19:12:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:18:52.234 19:12:08 -- setup/hugepages.sh@83 -- # : 0 00:18:52.234 19:12:08 -- setup/hugepages.sh@84 -- # : 0 00:18:52.234 19:12:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:52.234 19:12:08 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:18:52.234 19:12:08 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:18:52.234 19:12:08 -- setup/hugepages.sh@160 -- # setup 
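even_2G_alloc finishes by comparing the per-node count against the expected value ("node0=1024 expecting 1024"), and odd_alloc then asks for an odd count of 1025 pages via HUGEMEM=2049. A rough sketch of that per-node comparison under the usual sysfs layout (the loop and variable names are illustrative, not the hugepages.sh code):

#!/usr/bin/env bash
# Illustrative only: report each NUMA node's hugepage count and fail if it
# does not match the expected value, mirroring "node0=1024 expecting 1024".
expected=${1:-1024}
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    [[ ${total:-0} -eq $expected ]] || exit 1
done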
output 00:18:52.234 19:12:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:52.234 19:12:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:52.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:52.493 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:53.064 19:12:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:18:53.064 19:12:08 -- setup/hugepages.sh@89 -- # local node 00:18:53.064 19:12:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:53.064 19:12:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:53.064 19:12:08 -- setup/hugepages.sh@92 -- # local surp 00:18:53.064 19:12:08 -- setup/hugepages.sh@93 -- # local resv 00:18:53.064 19:12:08 -- setup/hugepages.sh@94 -- # local anon 00:18:53.064 19:12:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:53.064 19:12:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:53.064 19:12:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:53.064 19:12:08 -- setup/common.sh@18 -- # local node= 00:18:53.064 19:12:08 -- setup/common.sh@19 -- # local var val 00:18:53.064 19:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:18:53.064 19:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:53.064 19:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:53.064 19:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:53.064 19:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:18:53.064 19:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4704744 kB' 'MemAvailable: 9433928 kB' 'Buffers: 38132 kB' 'Cached: 4802932 kB' 'SwapCached: 0 kB' 'Active: 1493696 kB' 'Inactive: 3535268 kB' 'Active(anon): 196920 kB' 'Inactive(anon): 1816 kB' 'Active(file): 1296776 kB' 'Inactive(file): 3533452 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209608 kB' 'Writeback: 0 kB' 'AnonPages: 206404 kB' 'Mapped: 129900 kB' 'Shmem: 2636 kB' 'KReclaimable: 220820 kB' 'Slab: 320440 kB' 'SReclaimable: 220820 kB' 'SUnreclaim: 99620 kB' 'KernelStack: 4864 kB' 'PageTables: 4432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 835424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var 
val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # 
continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.064 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.064 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.065 19:12:08 -- setup/common.sh@33 -- # echo 0 00:18:53.065 19:12:08 -- setup/common.sh@33 -- # return 0 00:18:53.065 19:12:08 -- setup/hugepages.sh@97 -- # anon=0 00:18:53.065 19:12:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:53.065 19:12:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:53.065 19:12:08 -- setup/common.sh@18 -- # local node= 00:18:53.065 19:12:08 -- setup/common.sh@19 -- # local var val 00:18:53.065 19:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:18:53.065 19:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:53.065 19:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:53.065 19:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:53.065 19:12:08 -- setup/common.sh@28 -- # mapfile -t 
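The verify_nr_hugepages pass whose trace continues below collects AnonHugePages, HugePages_Surp and HugePages_Rsvd one after another and then applies the accounting check seen earlier at hugepages.sh@110, HugePages_Total == nr_hugepages + surp + resv. A compact sketch of that bookkeeping against the system-wide /proc/meminfo (the helper name field and the default of 1025 are assumptions for the example):

#!/usr/bin/env bash
# Illustrative only: gather the counters the verification step reads and
# check that the configured total accounts for surplus and reserved pages.
nr_hugepages=${1:-1025}   # expected count; odd_alloc above requests 1025

field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

anon_kb=$(field AnonHugePages)     # transparent hugepages in use, kB
surp=$(field HugePages_Surp)       # surplus hugepages
resv=$(field HugePages_Rsvd)       # reserved hugepages
total=$(field HugePages_Total)

echo "total=${total} expected=${nr_hugepages} surp=${surp} resv=${resv} anon_kb=${anon_kb}"
(( total == nr_hugepages + surp + resv )) || exit 1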
mem 00:18:53.065 19:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705004 kB' 'MemAvailable: 9434188 kB' 'Buffers: 38132 kB' 'Cached: 4802932 kB' 'SwapCached: 0 kB' 'Active: 1493696 kB' 'Inactive: 3535268 kB' 'Active(anon): 196920 kB' 'Inactive(anon): 1816 kB' 'Active(file): 1296776 kB' 'Inactive(file): 3533452 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209608 kB' 'Writeback: 0 kB' 'AnonPages: 206664 kB' 'Mapped: 129900 kB' 'Shmem: 2636 kB' 'KReclaimable: 220820 kB' 'Slab: 320440 kB' 'SReclaimable: 220820 kB' 'SUnreclaim: 99620 kB' 'KernelStack: 4864 kB' 'PageTables: 4432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 830048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # 
continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.065 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.065 19:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 
-- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 
19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.066 19:12:08 -- setup/common.sh@33 -- # echo 0 00:18:53.066 19:12:08 -- setup/common.sh@33 -- # return 0 00:18:53.066 19:12:08 -- setup/hugepages.sh@99 -- # surp=0 00:18:53.066 19:12:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:53.066 19:12:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:53.066 19:12:08 -- setup/common.sh@18 -- # local node= 00:18:53.066 19:12:08 -- setup/common.sh@19 -- # local var val 00:18:53.066 19:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:18:53.066 19:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:53.066 19:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:53.066 19:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:53.066 19:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:18:53.066 19:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705232 kB' 'MemAvailable: 9434416 kB' 'Buffers: 38132 kB' 'Cached: 4802932 kB' 'SwapCached: 0 kB' 'Active: 1493448 kB' 'Inactive: 3535268 kB' 'Active(anon): 196672 kB' 'Inactive(anon): 1816 kB' 'Active(file): 1296776 kB' 'Inactive(file): 3533452 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209608 kB' 'Writeback: 0 kB' 'AnonPages: 206096 kB' 'Mapped: 129900 kB' 'Shmem: 2636 kB' 'KReclaimable: 220820 kB' 'Slab: 320664 kB' 'SReclaimable: 220820 kB' 'SUnreclaim: 99844 kB' 'KernelStack: 4800 kB' 'PageTables: 4496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 834588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 
-- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.066 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.066 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 
-- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.067 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.067 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:53.067 19:12:08 -- setup/common.sh@33 -- # echo 0 00:18:53.067 19:12:08 -- setup/common.sh@33 -- # return 0 00:18:53.067 nr_hugepages=1025 00:18:53.067 resv_hugepages=0 00:18:53.067 surplus_hugepages=0 00:18:53.067 anon_hugepages=0 00:18:53.067 19:12:08 -- setup/hugepages.sh@100 -- # resv=0 00:18:53.067 19:12:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:18:53.067 19:12:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:53.067 19:12:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:53.067 19:12:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:53.067 19:12:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:18:53.067 19:12:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:18:53.067 19:12:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:53.067 19:12:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:53.067 19:12:08 -- setup/common.sh@18 -- # local node= 00:18:53.067 19:12:08 -- setup/common.sh@19 -- # local var val 00:18:53.067 19:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:18:53.067 19:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:53.068 19:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:53.068 19:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:53.068 19:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:18:53.068 19:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705044 kB' 'MemAvailable: 9434192 kB' 'Buffers: 38132 kB' 'Cached: 4802924 kB' 'SwapCached: 0 kB' 'Active: 1493932 kB' 'Inactive: 3535260 kB' 'Active(anon): 197156 kB' 'Inactive(anon): 
1808 kB' 'Active(file): 1296776 kB' 'Inactive(file): 3533452 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209656 kB' 'Writeback: 0 kB' 'AnonPages: 206820 kB' 'Mapped: 129904 kB' 'Shmem: 2628 kB' 'KReclaimable: 220784 kB' 'Slab: 320548 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99764 kB' 'KernelStack: 4820 kB' 'PageTables: 4412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 833292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14684 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 
00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.068 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.068 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.069 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.069 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:53.328 19:12:08 -- setup/common.sh@33 -- # echo 1025 00:18:53.328 19:12:08 -- setup/common.sh@33 -- # return 0 00:18:53.328 19:12:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:18:53.328 19:12:08 -- setup/hugepages.sh@112 -- # get_nodes 00:18:53.328 19:12:08 -- setup/hugepages.sh@27 -- # local node 00:18:53.328 19:12:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:53.328 19:12:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:18:53.328 19:12:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:53.328 19:12:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:53.328 
19:12:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:53.328 19:12:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:53.328 19:12:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:53.328 19:12:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:53.328 19:12:08 -- setup/common.sh@18 -- # local node=0 00:18:53.328 19:12:08 -- setup/common.sh@19 -- # local var val 00:18:53.328 19:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:18:53.328 19:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:53.328 19:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:53.328 19:12:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:53.328 19:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:18:53.328 19:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705324 kB' 'MemUsed: 7545780 kB' 'Active: 1493544 kB' 'Inactive: 3535260 kB' 'Active(anon): 196768 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296776 kB' 'Inactive(file): 3533452 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 209656 kB' 'Writeback: 0 kB' 'FilePages: 4841056 kB' 'Mapped: 129904 kB' 'AnonPages: 206208 kB' 'Shmem: 2628 kB' 'KernelStack: 4784 kB' 'PageTables: 4464 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220784 kB' 'Slab: 320564 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.328 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.328 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.328 19:12:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.328 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.328 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 
00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.329 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.329 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:53.329 19:12:09 -- setup/common.sh@33 -- # echo 0 00:18:53.329 19:12:09 -- setup/common.sh@33 -- # return 0 00:18:53.329 19:12:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:53.329 19:12:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:53.329 19:12:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:53.329 19:12:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:53.329 19:12:09 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:18:53.329 node0=1025 expecting 1025 00:18:53.329 19:12:09 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:18:53.329 00:18:53.329 real 0m1.011s 00:18:53.329 user 0m0.276s 00:18:53.329 sys 0m0.700s 00:18:53.329 19:12:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:53.329 19:12:09 -- common/autotest_common.sh@10 -- # set +x 00:18:53.329 ************************************ 00:18:53.329 END TEST odd_alloc 00:18:53.329 ************************************ 00:18:53.329 19:12:09 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:18:53.329 19:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:53.329 19:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:53.329 19:12:09 -- common/autotest_common.sh@10 -- # set +x 00:18:53.329 ************************************ 00:18:53.329 START TEST custom_alloc 00:18:53.329 ************************************ 00:18:53.329 19:12:09 -- common/autotest_common.sh@1111 -- # custom_alloc 00:18:53.329 19:12:09 -- setup/hugepages.sh@167 -- # local IFS=, 00:18:53.329 19:12:09 -- setup/hugepages.sh@169 -- # local node 00:18:53.329 19:12:09 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:18:53.329 19:12:09 -- setup/hugepages.sh@170 -- # local nodes_hp 00:18:53.329 19:12:09 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:18:53.329 19:12:09 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:18:53.329 19:12:09 -- setup/hugepages.sh@49 -- # local size=1048576 00:18:53.329 19:12:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:53.329 19:12:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:53.329 19:12:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:18:53.329 19:12:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:53.329 19:12:09 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:53.329 19:12:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:53.329 19:12:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:18:53.329 19:12:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:53.329 19:12:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:53.329 19:12:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:53.329 19:12:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:53.329 19:12:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:18:53.329 19:12:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:53.330 19:12:09 
-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:18:53.330 19:12:09 -- setup/hugepages.sh@83 -- # : 0 00:18:53.330 19:12:09 -- setup/hugepages.sh@84 -- # : 0 00:18:53.330 19:12:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:53.330 19:12:09 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:18:53.330 19:12:09 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:18:53.330 19:12:09 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:18:53.330 19:12:09 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:18:53.330 19:12:09 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:18:53.330 19:12:09 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:18:53.330 19:12:09 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:53.330 19:12:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:53.330 19:12:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:18:53.330 19:12:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:53.330 19:12:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:53.330 19:12:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:53.330 19:12:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:53.330 19:12:09 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:18:53.330 19:12:09 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:18:53.330 19:12:09 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:18:53.330 19:12:09 -- setup/hugepages.sh@78 -- # return 0 00:18:53.330 19:12:09 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:18:53.330 19:12:09 -- setup/hugepages.sh@187 -- # setup output 00:18:53.330 19:12:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:53.330 19:12:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:53.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:53.588 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:53.848 19:12:09 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:18:53.848 19:12:09 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:18:53.848 19:12:09 -- setup/hugepages.sh@89 -- # local node 00:18:53.848 19:12:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:53.848 19:12:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:53.848 19:12:09 -- setup/hugepages.sh@92 -- # local surp 00:18:53.848 19:12:09 -- setup/hugepages.sh@93 -- # local resv 00:18:53.848 19:12:09 -- setup/hugepages.sh@94 -- # local anon 00:18:53.848 19:12:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:53.848 19:12:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:53.848 19:12:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:53.848 19:12:09 -- setup/common.sh@18 -- # local node= 00:18:53.848 19:12:09 -- setup/common.sh@19 -- # local var val 00:18:53.848 19:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:18:53.848 19:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:53.848 19:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:53.848 19:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:53.848 19:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:18:53.848 19:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5751908 kB' 'MemAvailable: 10481056 kB' 
'Buffers: 38132 kB' 'Cached: 4802924 kB' 'SwapCached: 0 kB' 'Active: 1495364 kB' 'Inactive: 3535248 kB' 'Active(anon): 198576 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533440 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209700 kB' 'Writeback: 0 kB' 'AnonPages: 207928 kB' 'Mapped: 129916 kB' 'Shmem: 2628 kB' 'KReclaimable: 220784 kB' 'Slab: 320596 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99812 kB' 'KernelStack: 4908 kB' 'PageTables: 4272 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 836520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14684 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 
00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.848 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.848 19:12:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:53.849 19:12:09 -- setup/common.sh@32 -- # continue 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:53.849 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r 
var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.110 19:12:09 -- setup/common.sh@33 -- # echo 0 00:18:54.110 19:12:09 -- setup/common.sh@33 -- # return 0 00:18:54.110 19:12:09 -- setup/hugepages.sh@97 -- # anon=0 00:18:54.110 19:12:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:54.110 19:12:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:54.110 19:12:09 -- setup/common.sh@18 -- # local node= 00:18:54.110 19:12:09 -- setup/common.sh@19 -- # local var val 00:18:54.110 19:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.110 19:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.110 19:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:54.110 19:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:54.110 19:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.110 19:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5752176 kB' 'MemAvailable: 10481324 kB' 'Buffers: 38132 kB' 'Cached: 4802924 kB' 'SwapCached: 0 kB' 'Active: 1495412 kB' 'Inactive: 3535248 kB' 'Active(anon): 198624 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533440 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209700 kB' 'Writeback: 0 kB' 'AnonPages: 207744 kB' 'Mapped: 129880 kB' 'Shmem: 2628 kB' 'KReclaimable: 220784 kB' 'Slab: 320596 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99812 kB' 'KernelStack: 4892 kB' 'PageTables: 4252 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 836520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14684 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.110 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.110 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 
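The xtrace above and below repeats one pattern per /proc/meminfo field: setup/common.sh's get_meminfo walks the file with an IFS=': ' read loop, hits "continue" for every field that is not the requested key, and echoes the value once it matches. A minimal sketch of that loop follows; the helper name, the IFS handling, and the /sys/devices/system/node path come straight from this trace, while the regex-based "Node N" stripping and the simplified control flow are assumptions, not the script's exact code.

# Sketch only -- simplified from the setup/common.sh xtrace in this log.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query (second argument) reads that node's meminfo instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        # Per-node files prefix every field with "Node N "; drop it (assumed handling).
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        # Skip fields until the requested one is reached, then print its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo HugePages_Surp     -> 0 in this run
#      get_meminfo HugePages_Total 0  -> 512 for node0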
00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.111 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.111 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.112 19:12:09 -- setup/common.sh@33 -- # echo 0 00:18:54.112 19:12:09 -- setup/common.sh@33 -- # return 0 00:18:54.112 19:12:09 -- setup/hugepages.sh@99 -- # surp=0 00:18:54.112 19:12:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:54.112 19:12:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:54.112 19:12:09 -- setup/common.sh@18 -- # local node= 00:18:54.112 19:12:09 -- setup/common.sh@19 -- # local var val 00:18:54.112 19:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.112 19:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.112 19:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:54.112 19:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:54.112 19:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.112 19:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5752452 kB' 'MemAvailable: 10481600 kB' 'Buffers: 38132 kB' 'Cached: 4802924 kB' 'SwapCached: 0 kB' 
'Active: 1495052 kB' 'Inactive: 3535248 kB' 'Active(anon): 198264 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533440 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209700 kB' 'Writeback: 0 kB' 'AnonPages: 207740 kB' 'Mapped: 129880 kB' 'Shmem: 2628 kB' 'KReclaimable: 220784 kB' 'Slab: 320596 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99812 kB' 'KernelStack: 4812 kB' 'PageTables: 4116 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 841368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14684 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.112 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.112 19:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r 
var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.113 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.113 19:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.114 19:12:09 -- setup/common.sh@33 -- # echo 0 00:18:54.114 19:12:09 -- setup/common.sh@33 -- # return 0 
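Here get_meminfo has returned 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd; the next lines print the summary (nr_hugepages=512, resv/surplus/anon all 0) and then hugepages.sh checks that the kernel's totals add up and that node0 holds the expected 512 pages. The arithmetic amounts to the sketch below, given under assumptions: the function name verify_hugepage_accounting and the per-node HugePages_Total read are illustrative stand-ins for the script's verify_nr_hugepages / nodes_test bookkeeping, which the trace shows in full.

# Sketch of the accounting check performed next; names marked
# "illustrative" are not the script's own.
verify_hugepage_accounting() {           # illustrative name
    local expected=$1                    # 512 for this custom_alloc run
    local anon surp resv total node got
    anon=$(get_meminfo AnonHugePages)    # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    total=$(get_meminfo HugePages_Total) # 512
    # Global invariant checked at hugepages.sh@107/@110 in the trace.
    (( total == expected + surp + resv )) || return 1
    # Per-node pass: this VM has a single node, so node0 must hold all 512.
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        got=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$got expecting $expected"
        [[ $got == "$expected" ]] || return 1
    done
}

In this run both checks pass ("node0=512 expecting 512"), which is why custom_alloc exits cleanly a few lines below and the suite moves on to no_shrink_alloc with nr_hugepages=1024.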
00:18:54.114 nr_hugepages=512 00:18:54.114 resv_hugepages=0 00:18:54.114 surplus_hugepages=0 00:18:54.114 anon_hugepages=0 00:18:54.114 19:12:09 -- setup/hugepages.sh@100 -- # resv=0 00:18:54.114 19:12:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:18:54.114 19:12:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:54.114 19:12:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:54.114 19:12:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:54.114 19:12:09 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:18:54.114 19:12:09 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:18:54.114 19:12:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:54.114 19:12:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:54.114 19:12:09 -- setup/common.sh@18 -- # local node= 00:18:54.114 19:12:09 -- setup/common.sh@19 -- # local var val 00:18:54.114 19:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.114 19:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.114 19:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:54.114 19:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:54.114 19:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.114 19:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5752396 kB' 'MemAvailable: 10481544 kB' 'Buffers: 38132 kB' 'Cached: 4802924 kB' 'SwapCached: 0 kB' 'Active: 1495312 kB' 'Inactive: 3535248 kB' 'Active(anon): 198524 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533440 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209700 kB' 'Writeback: 0 kB' 'AnonPages: 208000 kB' 'Mapped: 129880 kB' 'Shmem: 2628 kB' 'KReclaimable: 220784 kB' 'Slab: 320596 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99812 kB' 'KernelStack: 4880 kB' 'PageTables: 4116 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 840032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14700 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.114 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.114 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 
-- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 
19:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.115 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.115 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # 
continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:54.116 19:12:09 -- setup/common.sh@33 -- # echo 512 00:18:54.116 19:12:09 -- setup/common.sh@33 -- # return 0 00:18:54.116 19:12:09 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:18:54.116 19:12:09 -- setup/hugepages.sh@112 -- # get_nodes 00:18:54.116 19:12:09 -- setup/hugepages.sh@27 -- # local node 00:18:54.116 19:12:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:54.116 19:12:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:54.116 19:12:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:54.116 19:12:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:54.116 19:12:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:54.116 19:12:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:54.116 19:12:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:54.116 19:12:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:54.116 19:12:09 -- setup/common.sh@18 -- # local node=0 00:18:54.116 19:12:09 -- setup/common.sh@19 -- # local var val 00:18:54.116 19:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.116 19:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.116 19:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:54.116 19:12:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:54.116 19:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.116 19:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5752656 kB' 'MemUsed: 6498448 kB' 'Active: 1495580 kB' 'Inactive: 3535248 kB' 'Active(anon): 198792 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533440 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 209700 kB' 'Writeback: 0 kB' 'FilePages: 4841056 kB' 'Mapped: 129880 kB' 'AnonPages: 208300 kB' 'Shmem: 2628 kB' 'KernelStack: 4880 kB' 'PageTables: 4120 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220784 kB' 'Slab: 320596 kB' 'SReclaimable: 220784 kB' 'SUnreclaim: 99812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:54.116 
19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.116 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.116 19:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- 
setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # continue 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.117 19:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.117 19:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.117 19:12:09 -- setup/common.sh@33 -- # echo 0 00:18:54.117 19:12:09 -- setup/common.sh@33 -- # return 0 00:18:54.117 19:12:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:54.117 19:12:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:54.117 19:12:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:54.117 19:12:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:54.117 19:12:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:18:54.117 node0=512 expecting 512 00:18:54.117 19:12:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:18:54.117 00:18:54.117 real 0m0.787s 00:18:54.117 user 0m0.285s 00:18:54.117 sys 0m0.467s 00:18:54.117 19:12:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.117 19:12:09 -- common/autotest_common.sh@10 -- # set +x 00:18:54.117 ************************************ 00:18:54.117 END TEST custom_alloc 00:18:54.117 ************************************ 00:18:54.117 19:12:09 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:18:54.117 19:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:54.117 19:12:09 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.117 19:12:09 -- common/autotest_common.sh@10 -- # set +x 00:18:54.117 ************************************ 00:18:54.117 START TEST no_shrink_alloc 00:18:54.117 ************************************ 00:18:54.117 19:12:09 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:18:54.117 19:12:09 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:18:54.117 19:12:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:18:54.117 19:12:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:18:54.117 19:12:10 -- setup/hugepages.sh@51 -- # shift 00:18:54.117 19:12:10 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:18:54.117 19:12:10 -- setup/hugepages.sh@52 -- # local node_ids 00:18:54.117 19:12:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:54.117 19:12:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:54.117 19:12:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:18:54.118 19:12:10 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:18:54.118 19:12:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:18:54.118 19:12:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:54.118 19:12:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:18:54.118 19:12:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:54.118 19:12:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:54.118 19:12:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:18:54.118 19:12:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:54.118 19:12:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:18:54.118 19:12:10 -- setup/hugepages.sh@73 -- # return 0 00:18:54.118 19:12:10 -- setup/hugepages.sh@198 -- # setup output 00:18:54.118 19:12:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:54.118 19:12:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:54.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:54.725 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:54.984 19:12:10 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:18:54.984 19:12:10 -- setup/hugepages.sh@89 -- # local node 00:18:54.984 19:12:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:54.984 19:12:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:54.984 19:12:10 -- setup/hugepages.sh@92 -- # local surp 00:18:54.984 19:12:10 -- setup/hugepages.sh@93 -- # local resv 00:18:54.984 19:12:10 -- setup/hugepages.sh@94 -- # local anon 00:18:54.984 19:12:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:54.985 19:12:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:54.985 19:12:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:54.985 19:12:10 -- setup/common.sh@18 -- # local node= 00:18:54.985 19:12:10 -- setup/common.sh@19 -- # local var val 00:18:54.985 19:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.985 19:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.985 19:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:54.985 19:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:54.985 19:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.985 19:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 
4705240 kB' 'MemAvailable: 9434548 kB' 'Buffers: 38132 kB' 'Cached: 4803052 kB' 'SwapCached: 0 kB' 'Active: 1495052 kB' 'Inactive: 3535376 kB' 'Active(anon): 198264 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533568 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 207692 kB' 'Mapped: 129912 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320240 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99424 kB' 'KernelStack: 4792 kB' 'PageTables: 4564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 831000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- 
setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.985 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.985 19:12:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:54.985 19:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:54.986 19:12:10 -- setup/common.sh@33 -- # echo 0 00:18:54.986 19:12:10 -- setup/common.sh@33 -- # return 0 00:18:54.986 19:12:10 -- setup/hugepages.sh@97 -- # anon=0 00:18:54.986 19:12:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:54.986 19:12:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:54.986 19:12:10 -- setup/common.sh@18 -- # local node= 00:18:54.986 19:12:10 -- setup/common.sh@19 -- # local var val 00:18:54.986 19:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.986 19:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.986 19:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:54.986 19:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:54.986 19:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.986 19:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705240 kB' 'MemAvailable: 9434548 kB' 'Buffers: 38132 kB' 'Cached: 4803052 kB' 'SwapCached: 0 kB' 'Active: 1495028 kB' 'Inactive: 3535376 kB' 'Active(anon): 198240 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533568 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 207928 kB' 'Mapped: 129912 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320240 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99424 kB' 'KernelStack: 4776 kB' 'PageTables: 4536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 831000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': 
' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- 
setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.986 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.986 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 
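
The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo: it reads the file into an array, splits each line on ': ', and skips every key (the long runs of "continue" statements) until it reaches the one requested, then echoes its value. Below is a minimal standalone sketch of that parsing idea, assuming only plain bash and a readable /proc/meminfo; the helper name is hypothetical, and the real function in setup/common.sh additionally handles the per-node meminfo files under /sys/devices/system/node and strips their "Node N" prefix.

# get_meminfo_sketch: print the value of one /proc/meminfo field, or fail if absent.
# Hypothetical helper; the logic mirrors what the trace above shows.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # "HugePages_Surp:   0" -> var=HugePages_Surp, val=0
        if [[ $var == "$get" ]]; then       # non-matching keys fall through, like the
            echo "$val"                     # repeated "continue" statements in the trace
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Usage (values as reported in this run):
#   get_meminfo_sketch HugePages_Surp    # -> 0
#   get_meminfo_sketch HugePages_Total   # -> 1024
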
00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:54.987 19:12:10 -- setup/common.sh@33 -- # echo 0 00:18:54.987 19:12:10 -- setup/common.sh@33 -- # return 0 00:18:54.987 19:12:10 -- setup/hugepages.sh@99 -- # surp=0 00:18:54.987 19:12:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:54.987 19:12:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:54.987 19:12:10 -- setup/common.sh@18 -- # local node= 00:18:54.987 19:12:10 -- setup/common.sh@19 -- # local var val 00:18:54.987 19:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:18:54.987 19:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:54.987 19:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:54.987 19:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:54.987 19:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:18:54.987 19:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705476 kB' 'MemAvailable: 9434784 kB' 'Buffers: 38132 kB' 'Cached: 4803052 
kB' 'SwapCached: 0 kB' 'Active: 1495136 kB' 'Inactive: 3535376 kB' 'Active(anon): 198348 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533568 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 207752 kB' 'Mapped: 129912 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320272 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99456 kB' 'KernelStack: 4800 kB' 'PageTables: 4396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 831000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:54.987 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:54.987 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # 
continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.249 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.249 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.250 19:12:10 -- setup/common.sh@33 -- # echo 0 00:18:55.250 19:12:10 -- 
setup/common.sh@33 -- # return 0 00:18:55.250 nr_hugepages=1024 00:18:55.250 resv_hugepages=0 00:18:55.250 surplus_hugepages=0 00:18:55.250 anon_hugepages=0 00:18:55.250 19:12:10 -- setup/hugepages.sh@100 -- # resv=0 00:18:55.250 19:12:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:55.250 19:12:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:55.250 19:12:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:55.250 19:12:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:55.250 19:12:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:55.250 19:12:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:55.250 19:12:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:55.250 19:12:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:55.250 19:12:10 -- setup/common.sh@18 -- # local node= 00:18:55.250 19:12:10 -- setup/common.sh@19 -- # local var val 00:18:55.250 19:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.250 19:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.250 19:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:55.250 19:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:55.250 19:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.250 19:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705752 kB' 'MemAvailable: 9435060 kB' 'Buffers: 38132 kB' 'Cached: 4803052 kB' 'SwapCached: 0 kB' 'Active: 1495540 kB' 'Inactive: 3535376 kB' 'Active(anon): 198752 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533568 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 207960 kB' 'Mapped: 129912 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320272 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99456 kB' 'KernelStack: 4816 kB' 'PageTables: 4436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 828984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14668 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.250 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.250 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 
-- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.251 19:12:10 -- setup/common.sh@33 -- # echo 1024 00:18:55.251 19:12:10 -- setup/common.sh@33 -- # return 0 00:18:55.251 19:12:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:55.251 19:12:10 -- setup/hugepages.sh@112 -- # get_nodes 00:18:55.251 19:12:10 -- setup/hugepages.sh@27 -- # local node 00:18:55.251 19:12:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:55.251 19:12:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:55.251 19:12:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:55.251 19:12:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:55.251 19:12:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:55.251 19:12:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:55.251 19:12:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:55.251 19:12:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:55.251 19:12:10 -- setup/common.sh@18 -- # local node=0 00:18:55.251 19:12:10 -- setup/common.sh@19 -- # local var val 00:18:55.251 19:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.251 19:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.251 19:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:55.251 19:12:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:55.251 19:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.251 19:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4706012 kB' 'MemUsed: 7545092 kB' 'Active: 1495000 kB' 'Inactive: 3535376 kB' 'Active(anon): 198212 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533568 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'FilePages: 4841184 kB' 'Mapped: 129912 kB' 'AnonPages: 207660 kB' 'Shmem: 2628 kB' 'KernelStack: 4816 kB' 'PageTables: 4432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220816 kB' 'Slab: 320272 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.251 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.251 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # continue 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.252 19:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.252 19:12:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.252 19:12:10 -- setup/common.sh@33 -- # echo 0 00:18:55.252 19:12:10 -- setup/common.sh@33 -- # return 0 00:18:55.252 node0=1024 expecting 1024 00:18:55.252 19:12:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:55.252 19:12:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:55.252 19:12:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:55.252 19:12:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:55.252 19:12:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:55.252 19:12:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:55.252 19:12:10 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:18:55.252 19:12:10 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:18:55.252 19:12:10 -- setup/hugepages.sh@202 -- # setup output 00:18:55.252 19:12:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:55.252 19:12:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:55.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:18:55.512 
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:55.512 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:18:55.512 19:12:11 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:18:55.512 19:12:11 -- setup/hugepages.sh@89 -- # local node 00:18:55.512 19:12:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:18:55.512 19:12:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:18:55.512 19:12:11 -- setup/hugepages.sh@92 -- # local surp 00:18:55.512 19:12:11 -- setup/hugepages.sh@93 -- # local resv 00:18:55.512 19:12:11 -- setup/hugepages.sh@94 -- # local anon 00:18:55.512 19:12:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:55.512 19:12:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:55.512 19:12:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:55.512 19:12:11 -- setup/common.sh@18 -- # local node= 00:18:55.512 19:12:11 -- setup/common.sh@19 -- # local var val 00:18:55.512 19:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.512 19:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.512 19:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:55.512 19:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:55.512 19:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.512 19:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705072 kB' 'MemAvailable: 9434384 kB' 'Buffers: 38132 kB' 'Cached: 4803056 kB' 'SwapCached: 0 kB' 'Active: 1495556 kB' 'Inactive: 3535380 kB' 'Active(anon): 198768 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533572 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 208468 kB' 'Mapped: 130172 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320580 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99764 kB' 'KernelStack: 4824 kB' 'PageTables: 4636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 830684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14636 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': 
' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- 
# continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.512 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.512 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:55.513 19:12:11 -- setup/common.sh@33 -- # echo 0 00:18:55.513 19:12:11 -- setup/common.sh@33 -- # return 0 00:18:55.513 19:12:11 -- setup/hugepages.sh@97 -- # anon=0 00:18:55.513 19:12:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:55.513 19:12:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:55.513 19:12:11 -- setup/common.sh@18 -- # local node= 00:18:55.513 19:12:11 -- setup/common.sh@19 -- # local var val 00:18:55.513 19:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.513 19:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.513 19:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:55.513 19:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:55.513 19:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.513 19:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705340 kB' 'MemAvailable: 9434652 kB' 'Buffers: 38132 kB' 'Cached: 4803056 kB' 'SwapCached: 0 kB' 'Active: 1495532 kB' 'Inactive: 3535380 kB' 'Active(anon): 198744 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533572 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 208316 kB' 'Mapped: 130172 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320580 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99764 kB' 'KernelStack: 4808 kB' 'PageTables: 4612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 830684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14636 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.513 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.513 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- 
setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val 
_ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.514 19:12:11 -- setup/common.sh@33 -- # echo 0 00:18:55.514 19:12:11 -- setup/common.sh@33 -- # return 0 00:18:55.514 19:12:11 -- setup/hugepages.sh@99 -- # surp=0 00:18:55.514 19:12:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:55.514 19:12:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:55.514 19:12:11 -- setup/common.sh@18 -- # local node= 00:18:55.514 19:12:11 -- setup/common.sh@19 -- # local var val 00:18:55.514 19:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.514 19:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.514 19:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:55.514 19:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:55.514 19:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.514 19:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705396 kB' 'MemAvailable: 9434708 kB' 'Buffers: 38132 kB' 'Cached: 4803056 kB' 'SwapCached: 0 kB' 'Active: 1495320 kB' 'Inactive: 3535380 kB' 'Active(anon): 198532 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533572 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'AnonPages: 208044 kB' 'Mapped: 130008 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320580 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99764 kB' 'KernelStack: 4756 kB' 'PageTables: 4600 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 835884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14652 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.514 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.514 19:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.776 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.776 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 
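[annotation] The long runs of "# [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "# continue" entries above are setup/common.sh walking /proc/meminfo (or a node's meminfo file) one key at a time until it reaches the field it was asked for (HugePages_Total, HugePages_Surp, AnonHugePages, HugePages_Rsvd) and echoing its value. A minimal stand-alone sketch of that lookup, using a simplified helper with illustrative names rather than the actual setup/common.sh code:

#!/usr/bin/env bash
# get_meminfo_sketch: illustrative only, not the real setup/common.sh helper.
# Prints the value of one meminfo field, optionally scoped to a NUMA node.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local line var val _

    # Per-node lookups use the sysfs copy, whose lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#Node +([0-9]) }          # strip "Node N " if present
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue     # the "# continue" entries in the trace
        echo "$val"                          # the "echo 1024" / "echo 0" entries
        return 0
    done <"$mem_f"
    return 1
}

# Usage matching the calls seen in the log:
#   get_meminfo_sketch HugePages_Total      # -> 1024 on this runner
#   get_meminfo_sketch HugePages_Surp 0     # -> 0 for node 0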
00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:55.777 19:12:11 -- setup/common.sh@33 -- # echo 0 00:18:55.777 19:12:11 -- setup/common.sh@33 -- # return 0 00:18:55.777 nr_hugepages=1024 00:18:55.777 resv_hugepages=0 00:18:55.777 surplus_hugepages=0 00:18:55.777 anon_hugepages=0 00:18:55.777 19:12:11 -- setup/hugepages.sh@100 -- # resv=0 00:18:55.777 19:12:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:55.777 19:12:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:55.777 19:12:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:55.777 19:12:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:55.777 19:12:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:55.777 19:12:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:55.777 19:12:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:55.777 19:12:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:55.777 19:12:11 -- setup/common.sh@18 -- # local node= 00:18:55.777 19:12:11 -- setup/common.sh@19 -- # local var val 00:18:55.777 19:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.777 19:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.777 19:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:55.777 19:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:55.777 19:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.777 19:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705704 kB' 'MemAvailable: 9435016 kB' 'Buffers: 38132 kB' 'Cached: 4803056 kB' 'SwapCached: 0 kB' 'Active: 1495256 kB' 'Inactive: 3535380 kB' 'Active(anon): 198468 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533572 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 209744 kB' 
'Writeback: 0 kB' 'AnonPages: 207656 kB' 'Mapped: 129960 kB' 'Shmem: 2628 kB' 'KReclaimable: 220816 kB' 'Slab: 320580 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99764 kB' 'KernelStack: 4800 kB' 'PageTables: 4356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 834252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14652 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.777 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.777 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 
00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
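[annotation] The values collected above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the check the trace writes as "(( 1024 == nr_hugepages + surp + resv ))": the kernel's HugePages_Total has to equal the requested page count plus surplus plus reserved pages. A stand-alone sketch of that accounting, with illustrative names rather than the real setup/hugepages.sh:

#!/usr/bin/env bash
# Hugepage accounting sketch; names and structure are illustrative only.
meminfo() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

nr_hugepages=1024                       # what this test run expects (see log)
total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# Mirrors "(( 1024 == nr_hugepages + surp + resv ))" from the trace.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage totals consistent"
else
    echo "unexpected split: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi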
00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.778 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.778 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:55.778 19:12:11 -- setup/common.sh@33 -- # echo 1024 00:18:55.778 19:12:11 -- setup/common.sh@33 -- # return 0 00:18:55.778 19:12:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:55.778 19:12:11 -- setup/hugepages.sh@112 -- # get_nodes 00:18:55.778 19:12:11 -- setup/hugepages.sh@27 -- # local node 00:18:55.778 19:12:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:55.778 19:12:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:55.778 19:12:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:18:55.778 19:12:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:55.778 19:12:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:55.778 19:12:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:55.778 
19:12:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:55.778 19:12:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:55.778 19:12:11 -- setup/common.sh@18 -- # local node=0 00:18:55.778 19:12:11 -- setup/common.sh@19 -- # local var val 00:18:55.778 19:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:18:55.778 19:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:55.778 19:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:55.778 19:12:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:55.778 19:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:18:55.779 19:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4705972 kB' 'MemUsed: 7545132 kB' 'Active: 1494984 kB' 'Inactive: 3535380 kB' 'Active(anon): 198196 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1296788 kB' 'Inactive(file): 3533572 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 209744 kB' 'Writeback: 0 kB' 'FilePages: 4841188 kB' 'Mapped: 129912 kB' 'AnonPages: 207536 kB' 'Shmem: 2628 kB' 'KernelStack: 4804 kB' 'PageTables: 4328 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220816 kB' 'Slab: 320704 kB' 'SReclaimable: 220816 kB' 'SUnreclaim: 99888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 
19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # continue 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:18:55.779 19:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:18:55.779 19:12:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:55.779 19:12:11 -- setup/common.sh@33 -- # echo 0 00:18:55.779 19:12:11 -- setup/common.sh@33 -- # return 0 00:18:55.779 node0=1024 expecting 1024 00:18:55.779 ************************************ 00:18:55.779 END TEST no_shrink_alloc 00:18:55.779 ************************************ 00:18:55.779 19:12:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:55.779 19:12:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:55.779 19:12:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:55.779 19:12:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:55.779 19:12:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:55.779 19:12:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:55.779 00:18:55.779 real 0m1.530s 00:18:55.779 user 0m0.543s 00:18:55.779 sys 0m0.935s 00:18:55.779 19:12:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:55.779 19:12:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.779 19:12:11 -- setup/hugepages.sh@217 -- # clear_hp 00:18:55.779 19:12:11 -- setup/hugepages.sh@37 -- # local node hp 00:18:55.779 19:12:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:18:55.780 19:12:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:55.780 19:12:11 -- setup/hugepages.sh@41 -- # echo 0 00:18:55.780 19:12:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:55.780 19:12:11 -- setup/hugepages.sh@41 -- # echo 0 00:18:55.780 19:12:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:18:55.780 19:12:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:18:55.780 00:18:55.780 real 0m7.061s 00:18:55.780 user 0m2.286s 00:18:55.780 sys 0m4.447s 00:18:55.780 19:12:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:55.780 19:12:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.780 ************************************ 00:18:55.780 END TEST hugepages 00:18:55.780 ************************************ 00:18:55.780 19:12:11 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:18:55.780 19:12:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:55.780 19:12:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.780 19:12:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.780 ************************************ 00:18:55.780 START TEST driver 00:18:55.780 ************************************ 00:18:55.780 19:12:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:18:56.039 * Looking for test storage... 
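Note on the hugepages trace above: the long field-by-field loop is SPDK's setup/common.sh get_meminfo helper reading /proc/meminfo (or a per-node meminfo file) with IFS=': '. The block below is only a condensed, hypothetical re-sketch of that idea for readability; it is not the script that actually ran.

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch HugePages_Total [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node meminfo rows are prefixed with "Node <n> "; strip that first.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"      # e.g. "1024" for HugePages_Total, "12251104" for MemTotal
            return 0
        fi
    done <"$mem_f"
    return 1
}
# Example: get_meminfo_sketch HugePages_Free 0   -> free hugepages on NUMA node 0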
00:18:56.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:18:56.039 19:12:11 -- setup/driver.sh@68 -- # setup reset 00:18:56.039 19:12:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:56.039 19:12:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:56.607 19:12:12 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:18:56.607 19:12:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:56.607 19:12:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.607 19:12:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.607 ************************************ 00:18:56.607 START TEST guess_driver 00:18:56.607 ************************************ 00:18:56.607 19:12:12 -- common/autotest_common.sh@1111 -- # guess_driver 00:18:56.607 19:12:12 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:18:56.607 19:12:12 -- setup/driver.sh@47 -- # local fail=0 00:18:56.607 19:12:12 -- setup/driver.sh@49 -- # pick_driver 00:18:56.607 19:12:12 -- setup/driver.sh@36 -- # vfio 00:18:56.607 19:12:12 -- setup/driver.sh@21 -- # local iommu_grups 00:18:56.607 19:12:12 -- setup/driver.sh@22 -- # local unsafe_vfio 00:18:56.607 19:12:12 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:18:56.607 19:12:12 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:18:56.607 19:12:12 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:18:56.607 19:12:12 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:18:56.607 19:12:12 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:18:56.607 19:12:12 -- setup/driver.sh@32 -- # return 1 00:18:56.607 19:12:12 -- setup/driver.sh@38 -- # uio 00:18:56.607 19:12:12 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:18:56.607 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:18:56.607 19:12:12 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:18:56.607 19:12:12 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:18:56.607 Looking for driver=uio_pci_generic 00:18:56.607 19:12:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:56.607 19:12:12 -- setup/driver.sh@45 -- # setup output config 00:18:56.607 19:12:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:18:56.607 19:12:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:18:56.865 19:12:12 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:18:56.865 19:12:12 -- setup/driver.sh@58 -- # continue 00:18:56.865 19:12:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:57.132 19:12:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:57.132 19:12:12 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:18:57.132 19:12:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:58.070 19:12:13 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:18:58.071 19:12:13 -- setup/driver.sh@65 -- # setup reset 
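Note on the guess_driver trace that follows: the pick_driver logic reduces to a small decision tree, prefer vfio-pci when the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve its module. This is a hypothetical stand-alone sketch of that flow, not the actual setup/driver.sh.

pick_driver_sketch() {
    shopt -s nullglob                        # so an empty iommu_groups glob counts as 0 entries
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci needs a working IOMMU, or the explicit unsafe no-IOMMU override.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Otherwise fall back to uio_pci_generic, provided modprobe resolves it to a .ko file.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}
In the run traced here the VM has no IOMMU groups and unsafe_vfio=N, so the test settles on uio_pci_generic, which is the driver the later device-binding steps expect.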
00:18:58.071 19:12:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:58.071 19:12:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:58.638 ************************************ 00:18:58.638 END TEST guess_driver 00:18:58.638 ************************************ 00:18:58.638 00:18:58.638 real 0m2.057s 00:18:58.638 user 0m0.440s 00:18:58.638 sys 0m1.600s 00:18:58.638 19:12:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:58.638 19:12:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.638 ************************************ 00:18:58.638 END TEST driver 00:18:58.638 ************************************ 00:18:58.638 00:18:58.638 real 0m2.769s 00:18:58.638 user 0m0.735s 00:18:58.638 sys 0m2.061s 00:18:58.638 19:12:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:58.638 19:12:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.638 19:12:14 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:18:58.638 19:12:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:58.638 19:12:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:58.638 19:12:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.638 ************************************ 00:18:58.638 START TEST devices 00:18:58.638 ************************************ 00:18:58.638 19:12:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:18:58.896 * Looking for test storage... 00:18:58.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:18:58.896 19:12:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:18:58.896 19:12:14 -- setup/devices.sh@192 -- # setup reset 00:18:58.896 19:12:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:58.896 19:12:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:59.463 19:12:15 -- setup/devices.sh@194 -- # get_zoned_devs 00:18:59.463 19:12:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:18:59.463 19:12:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:18:59.463 19:12:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:18:59.463 19:12:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:59.463 19:12:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:18:59.463 19:12:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:59.463 19:12:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:59.463 19:12:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:59.463 19:12:15 -- setup/devices.sh@196 -- # blocks=() 00:18:59.463 19:12:15 -- setup/devices.sh@196 -- # declare -a blocks 00:18:59.463 19:12:15 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:18:59.463 19:12:15 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:18:59.463 19:12:15 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:18:59.463 19:12:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:18:59.463 19:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:18:59.463 19:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:18:59.463 19:12:15 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:18:59.463 19:12:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:18:59.463 19:12:15 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:18:59.463 19:12:15 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:59.463 19:12:15 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:59.463 No valid GPT data, bailing 00:18:59.463 19:12:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:59.463 19:12:15 -- scripts/common.sh@391 -- # pt= 00:18:59.463 19:12:15 -- scripts/common.sh@392 -- # return 1 00:18:59.463 19:12:15 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:18:59.463 19:12:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:18:59.463 19:12:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:59.463 19:12:15 -- setup/common.sh@80 -- # echo 5368709120 00:18:59.463 19:12:15 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:18:59.463 19:12:15 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:18:59.463 19:12:15 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:18:59.463 19:12:15 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:18:59.463 19:12:15 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:18:59.463 19:12:15 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:18:59.463 19:12:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:59.463 19:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.463 19:12:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.463 ************************************ 00:18:59.463 START TEST nvme_mount 00:18:59.463 ************************************ 00:18:59.463 19:12:15 -- common/autotest_common.sh@1111 -- # nvme_mount 00:18:59.463 19:12:15 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:18:59.463 19:12:15 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:18:59.463 19:12:15 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:18:59.463 19:12:15 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:18:59.463 19:12:15 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:18:59.463 19:12:15 -- setup/common.sh@39 -- # local disk=nvme0n1 00:18:59.463 19:12:15 -- setup/common.sh@40 -- # local part_no=1 00:18:59.463 19:12:15 -- setup/common.sh@41 -- # local size=1073741824 00:18:59.463 19:12:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:18:59.463 19:12:15 -- setup/common.sh@44 -- # parts=() 00:18:59.463 19:12:15 -- setup/common.sh@44 -- # local parts 00:18:59.463 19:12:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:18:59.463 19:12:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:18:59.463 19:12:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:18:59.463 19:12:15 -- setup/common.sh@46 -- # (( part++ )) 00:18:59.463 19:12:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:18:59.463 19:12:15 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:18:59.463 19:12:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:18:59.463 19:12:15 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:19:00.839 Creating new GPT entries in memory. 00:19:00.839 GPT data structures destroyed! You may now partition the disk using fdisk or 00:19:00.839 other utilities. 00:19:00.839 19:12:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:19:00.839 19:12:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:00.839 19:12:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:19:00.839 19:12:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:00.839 19:12:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:19:02.215 Creating new GPT entries in memory. 00:19:02.215 The operation has completed successfully. 00:19:02.215 19:12:17 -- setup/common.sh@57 -- # (( part++ )) 00:19:02.215 19:12:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:02.215 19:12:17 -- setup/common.sh@62 -- # wait 103476 00:19:02.215 19:12:17 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:02.215 19:12:17 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:19:02.215 19:12:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:02.215 19:12:17 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:19:02.215 19:12:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:19:02.215 19:12:17 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:02.215 19:12:17 -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:02.215 19:12:17 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:19:02.215 19:12:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:19:02.215 19:12:17 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:02.215 19:12:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:02.215 19:12:17 -- setup/devices.sh@53 -- # local found=0 00:19:02.215 19:12:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:02.215 19:12:17 -- setup/devices.sh@56 -- # : 00:19:02.215 19:12:17 -- setup/devices.sh@59 -- # local pci status 00:19:02.215 19:12:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:02.215 19:12:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:19:02.215 19:12:17 -- setup/devices.sh@47 -- # setup output config 00:19:02.215 19:12:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:02.215 19:12:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:02.215 19:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:02.215 19:12:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:19:02.215 19:12:18 -- setup/devices.sh@63 -- # found=1 00:19:02.215 19:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:02.215 19:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:02.215 19:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:02.215 19:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:02.215 19:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:03.151 19:12:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:03.151 19:12:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:19:03.151 19:12:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:03.151 19:12:19 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:03.151 19:12:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:03.151 19:12:19 -- setup/devices.sh@110 -- # cleanup_nvme 00:19:03.151 19:12:19 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:03.151 19:12:19 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:03.151 19:12:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:03.151 19:12:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:19:03.151 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:03.151 19:12:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:03.151 19:12:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:03.409 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:19:03.409 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:19:03.409 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:03.409 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:03.409 19:12:19 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:19:03.409 19:12:19 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:19:03.409 19:12:19 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:03.409 19:12:19 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:19:03.409 19:12:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:19:03.409 19:12:19 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:03.409 19:12:19 -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:03.409 19:12:19 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:19:03.409 19:12:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:19:03.409 19:12:19 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:03.409 19:12:19 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:03.409 19:12:19 -- setup/devices.sh@53 -- # local found=0 00:19:03.409 19:12:19 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:03.409 19:12:19 -- setup/devices.sh@56 -- # : 00:19:03.409 19:12:19 -- setup/devices.sh@59 -- # local pci status 00:19:03.409 19:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:03.409 19:12:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:19:03.409 19:12:19 -- setup/devices.sh@47 -- # setup output config 00:19:03.409 19:12:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:03.409 19:12:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:03.668 19:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:03.668 19:12:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:19:03.668 19:12:19 -- setup/devices.sh@63 -- # found=1 00:19:03.668 19:12:19 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:19:03.668 19:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:03.668 19:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:03.668 19:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:03.668 19:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:04.605 19:12:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:04.605 19:12:20 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:19:04.605 19:12:20 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:04.605 19:12:20 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:04.605 19:12:20 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:04.605 19:12:20 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:04.605 19:12:20 -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:19:04.605 19:12:20 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:19:04.605 19:12:20 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:19:04.605 19:12:20 -- setup/devices.sh@50 -- # local mount_point= 00:19:04.605 19:12:20 -- setup/devices.sh@51 -- # local test_file= 00:19:04.605 19:12:20 -- setup/devices.sh@53 -- # local found=0 00:19:04.605 19:12:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:19:04.605 19:12:20 -- setup/devices.sh@59 -- # local pci status 00:19:04.605 19:12:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:04.605 19:12:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:19:04.605 19:12:20 -- setup/devices.sh@47 -- # setup output config 00:19:04.605 19:12:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:04.605 19:12:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:04.864 19:12:20 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:04.864 19:12:20 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:19:04.864 19:12:20 -- setup/devices.sh@63 -- # found=1 00:19:04.864 19:12:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:04.864 19:12:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:04.864 19:12:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:05.122 19:12:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:05.122 19:12:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:06.058 19:12:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:06.058 19:12:21 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:19:06.058 19:12:21 -- setup/devices.sh@68 -- # return 0 00:19:06.058 19:12:21 -- setup/devices.sh@128 -- # cleanup_nvme 00:19:06.058 19:12:21 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:06.058 19:12:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:06.058 19:12:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:06.058 19:12:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:06.058 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:06.058 00:19:06.058 real 0m6.520s 00:19:06.058 user 0m0.705s 00:19:06.058 sys 0m3.402s 00:19:06.058 19:12:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:06.058 19:12:21 -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.058 ************************************ 00:19:06.058 END TEST nvme_mount 00:19:06.058 ************************************ 00:19:06.058 19:12:21 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:19:06.058 19:12:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:06.058 19:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.058 19:12:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.058 ************************************ 00:19:06.058 START TEST dm_mount 00:19:06.058 ************************************ 00:19:06.058 19:12:21 -- common/autotest_common.sh@1111 -- # dm_mount 00:19:06.058 19:12:21 -- setup/devices.sh@144 -- # pv=nvme0n1 00:19:06.058 19:12:21 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:19:06.058 19:12:21 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:19:06.058 19:12:21 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:19:06.058 19:12:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:19:06.058 19:12:21 -- setup/common.sh@40 -- # local part_no=2 00:19:06.058 19:12:21 -- setup/common.sh@41 -- # local size=1073741824 00:19:06.058 19:12:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:19:06.058 19:12:21 -- setup/common.sh@44 -- # parts=() 00:19:06.058 19:12:21 -- setup/common.sh@44 -- # local parts 00:19:06.058 19:12:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:19:06.058 19:12:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:06.058 19:12:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:06.058 19:12:21 -- setup/common.sh@46 -- # (( part++ )) 00:19:06.058 19:12:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:06.058 19:12:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:06.058 19:12:21 -- setup/common.sh@46 -- # (( part++ )) 00:19:06.058 19:12:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:06.058 19:12:21 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:19:06.058 19:12:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:19:06.059 19:12:21 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:19:07.433 Creating new GPT entries in memory. 00:19:07.433 GPT data structures destroyed! You may now partition the disk using fdisk or 00:19:07.433 other utilities. 00:19:07.433 19:12:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:19:07.433 19:12:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:07.433 19:12:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:07.433 19:12:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:07.433 19:12:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:19:08.366 Creating new GPT entries in memory. 00:19:08.366 The operation has completed successfully. 00:19:08.366 19:12:24 -- setup/common.sh@57 -- # (( part++ )) 00:19:08.366 19:12:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:08.366 19:12:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:08.366 19:12:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:08.366 19:12:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:19:09.301 The operation has completed successfully. 
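Note on the nvme_mount test that finished above (the dm_mount test now starting repeats the same pattern with two partitions): stripped of xtrace noise, it is a short, destructive partition/format/mount cycle. The lines below are a hypothetical condensation with placeholder paths, not the test script itself, and they wipe the named disk if run.

disk=/dev/nvme0n1
part=${disk}p1
mnt=/tmp/nvme_mount_sketch          # hypothetical scratch mount point

sgdisk "$disk" --zap-all                    # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191          # one ~128 MiB partition starting at sector 2048
mkfs.ext4 -qF "$part"                       # quiet, force over any stale signatures
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                      # the test drops a marker file like this one

umount "$mnt"
wipefs --all "$part"                        # clear the filesystem signature
wipefs --all "$disk"                        # and the partition table itself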
00:19:09.301 19:12:25 -- setup/common.sh@57 -- # (( part++ )) 00:19:09.301 19:12:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:09.301 19:12:25 -- setup/common.sh@62 -- # wait 103982 00:19:09.301 19:12:25 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:19:09.301 19:12:25 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:09.301 19:12:25 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:09.301 19:12:25 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:19:09.301 19:12:25 -- setup/devices.sh@160 -- # for t in {1..5} 00:19:09.301 19:12:25 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:09.301 19:12:25 -- setup/devices.sh@161 -- # break 00:19:09.301 19:12:25 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:09.302 19:12:25 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:19:09.302 19:12:25 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:19:09.302 19:12:25 -- setup/devices.sh@166 -- # dm=dm-0 00:19:09.302 19:12:25 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:19:09.302 19:12:25 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:19:09.302 19:12:25 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:09.302 19:12:25 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:19:09.302 19:12:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:09.302 19:12:25 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:09.302 19:12:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:19:09.302 19:12:25 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:09.302 19:12:25 -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:09.302 19:12:25 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:19:09.302 19:12:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:19:09.302 19:12:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:09.302 19:12:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:09.302 19:12:25 -- setup/devices.sh@53 -- # local found=0 00:19:09.302 19:12:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:19:09.302 19:12:25 -- setup/devices.sh@56 -- # : 00:19:09.302 19:12:25 -- setup/devices.sh@59 -- # local pci status 00:19:09.302 19:12:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.302 19:12:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:19:09.302 19:12:25 -- setup/devices.sh@47 -- # setup output config 00:19:09.302 19:12:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:09.302 19:12:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:09.560 19:12:25 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:09.561 19:12:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:19:09.561 19:12:25 -- setup/devices.sh@63 -- # found=1 00:19:09.561 19:12:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.561 19:12:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:09.561 19:12:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.819 19:12:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:09.819 19:12:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:10.753 19:12:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:10.753 19:12:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:19:10.753 19:12:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:10.753 19:12:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:19:10.753 19:12:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:10.753 19:12:26 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:10.753 19:12:26 -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:19:10.753 19:12:26 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:19:10.753 19:12:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:19:10.753 19:12:26 -- setup/devices.sh@50 -- # local mount_point= 00:19:10.753 19:12:26 -- setup/devices.sh@51 -- # local test_file= 00:19:10.753 19:12:26 -- setup/devices.sh@53 -- # local found=0 00:19:10.753 19:12:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:19:10.753 19:12:26 -- setup/devices.sh@59 -- # local pci status 00:19:10.753 19:12:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:10.753 19:12:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:19:10.753 19:12:26 -- setup/devices.sh@47 -- # setup output config 00:19:10.753 19:12:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:10.753 19:12:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:11.011 19:12:26 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:11.011 19:12:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:19:11.011 19:12:26 -- setup/devices.sh@63 -- # found=1 00:19:11.011 19:12:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:11.011 19:12:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:11.011 19:12:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:11.011 19:12:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:19:11.011 19:12:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:11.945 19:12:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:11.945 19:12:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:19:11.945 19:12:27 -- setup/devices.sh@68 -- # return 0 00:19:11.945 19:12:27 -- setup/devices.sh@187 -- # cleanup_dm 00:19:11.945 19:12:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:11.945 19:12:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:19:11.945 19:12:27 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:19:11.945 19:12:27 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:11.945 19:12:27 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:19:12.204 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:12.204 19:12:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:19:12.204 19:12:27 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:19:12.204 ************************************ 00:19:12.204 END TEST dm_mount 00:19:12.204 ************************************ 00:19:12.204 00:19:12.204 real 0m5.981s 00:19:12.204 user 0m0.531s 00:19:12.204 sys 0m2.211s 00:19:12.204 19:12:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.204 19:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:12.204 19:12:27 -- setup/devices.sh@1 -- # cleanup 00:19:12.204 19:12:27 -- setup/devices.sh@11 -- # cleanup_nvme 00:19:12.204 19:12:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:12.204 19:12:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:12.204 19:12:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:19:12.204 19:12:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:12.204 19:12:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:12.204 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:19:12.204 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:19:12.204 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:12.204 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:12.204 19:12:28 -- setup/devices.sh@12 -- # cleanup_dm 00:19:12.204 19:12:28 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:12.204 19:12:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:19:12.204 19:12:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:12.204 19:12:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:19:12.204 19:12:28 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:19:12.204 19:12:28 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:19:12.204 00:19:12.204 real 0m13.523s 00:19:12.204 user 0m1.744s 00:19:12.204 sys 0m6.085s 00:19:12.204 19:12:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.204 19:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:12.204 ************************************ 00:19:12.204 END TEST devices 00:19:12.204 ************************************ 00:19:12.204 ************************************ 00:19:12.204 END TEST setup.sh 00:19:12.204 ************************************ 00:19:12.204 00:19:12.204 real 0m29.108s 00:19:12.204 user 0m6.686s 00:19:12.204 sys 0m16.456s 00:19:12.204 19:12:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.204 19:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:12.204 19:12:28 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:19:12.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:19:12.772 Hugepages 00:19:12.772 node hugesize free / total 00:19:12.772 node0 1048576kB 0 / 0 00:19:12.772 node0 2048kB 2048 / 2048 00:19:12.772 00:19:12.772 Type BDF Vendor Device NUMA Driver Device Block devices 00:19:12.772 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:19:13.030 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:19:13.030 19:12:28 -- spdk/autotest.sh@130 -- # uname -s 00:19:13.030 
19:12:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:19:13.030 19:12:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:19:13.030 19:12:28 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:13.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:19:13.547 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:14.483 19:12:30 -- common/autotest_common.sh@1518 -- # sleep 1 00:19:15.417 19:12:31 -- common/autotest_common.sh@1519 -- # bdfs=() 00:19:15.417 19:12:31 -- common/autotest_common.sh@1519 -- # local bdfs 00:19:15.417 19:12:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:19:15.417 19:12:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:19:15.417 19:12:31 -- common/autotest_common.sh@1499 -- # bdfs=() 00:19:15.417 19:12:31 -- common/autotest_common.sh@1499 -- # local bdfs 00:19:15.417 19:12:31 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:15.417 19:12:31 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:15.417 19:12:31 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:19:15.675 19:12:31 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:19:15.675 19:12:31 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:19:15.675 19:12:31 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:15.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:19:15.933 Waiting for block devices as requested 00:19:15.933 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:15.933 19:12:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:19:15.933 19:12:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:19:15.933 19:12:31 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:19:15.933 19:12:31 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:19:15.933 19:12:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:19:15.933 19:12:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:19:15.933 19:12:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:19:16.191 19:12:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:19:16.191 19:12:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:19:16.191 19:12:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:19:16.191 19:12:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:19:16.191 19:12:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:19:16.191 19:12:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:19:16.191 19:12:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:19:16.191 19:12:31 -- common/autotest_common.sh@1541 
-- # [[ 0 -eq 0 ]] 00:19:16.191 19:12:31 -- common/autotest_common.sh@1543 -- # continue 00:19:16.191 19:12:31 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:19:16.191 19:12:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:16.191 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:19:16.191 19:12:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:19:16.191 19:12:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:16.191 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:19:16.191 19:12:31 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:16.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:19:16.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:17.642 19:12:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:19:17.642 19:12:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:17.642 19:12:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.642 19:12:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:19:17.642 19:12:33 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:19:17.642 19:12:33 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:19:17.642 19:12:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:19:17.642 19:12:33 -- common/autotest_common.sh@1563 -- # local bdfs 00:19:17.642 19:12:33 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:19:17.642 19:12:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:19:17.642 19:12:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:19:17.642 19:12:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:17.642 19:12:33 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:17.642 19:12:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:19:17.642 19:12:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:19:17.642 19:12:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:19:17.642 19:12:33 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:19:17.642 19:12:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:19:17.642 19:12:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:19:17.642 19:12:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:19:17.642 19:12:33 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:19:17.642 19:12:33 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:19:17.642 19:12:33 -- common/autotest_common.sh@1579 -- # return 0 00:19:17.642 19:12:33 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:19:17.642 19:12:33 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:19:17.642 19:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:17.642 19:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:17.642 19:12:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.901 ************************************ 00:19:17.901 START TEST unittest 00:19:17.901 ************************************ 00:19:17.901 19:12:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:19:17.901 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:19:17.901 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:19:17.901 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:19:17.901 +++ 
dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:19:17.901 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:19:17.901 + rootdir=/home/vagrant/spdk_repo/spdk 00:19:17.901 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:19:17.901 ++ rpc_py=rpc_cmd 00:19:17.901 ++ set -e 00:19:17.901 ++ shopt -s nullglob 00:19:17.901 ++ shopt -s extglob 00:19:17.901 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:19:17.901 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:19:17.901 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:19:17.901 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:17.901 +++ CONFIG_FIO_PLUGIN=y 00:19:17.901 +++ CONFIG_NVME_CUSE=y 00:19:17.901 +++ CONFIG_RAID5F=y 00:19:17.901 +++ CONFIG_LTO=n 00:19:17.901 +++ CONFIG_SMA=n 00:19:17.901 +++ CONFIG_ISAL=y 00:19:17.901 +++ CONFIG_OPENSSL_PATH= 00:19:17.901 +++ CONFIG_IDXD_KERNEL=n 00:19:17.901 +++ CONFIG_URING_PATH= 00:19:17.901 +++ CONFIG_DAOS=n 00:19:17.901 +++ CONFIG_DPDK_LIB_DIR= 00:19:17.901 +++ CONFIG_OCF=n 00:19:17.901 +++ CONFIG_EXAMPLES=y 00:19:17.901 +++ CONFIG_RDMA_PROV=verbs 00:19:17.901 +++ CONFIG_ISCSI_INITIATOR=y 00:19:17.901 +++ CONFIG_VTUNE=n 00:19:17.901 +++ CONFIG_DPDK_INC_DIR= 00:19:17.901 +++ CONFIG_CET=n 00:19:17.901 +++ CONFIG_TESTS=y 00:19:17.901 +++ CONFIG_APPS=y 00:19:17.901 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:17.901 +++ CONFIG_DAOS_DIR= 00:19:17.901 +++ CONFIG_CRYPTO_MLX5=n 00:19:17.901 +++ CONFIG_XNVME=n 00:19:17.901 +++ CONFIG_UNIT_TESTS=y 00:19:17.901 +++ CONFIG_FUSE=n 00:19:17.901 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:17.901 +++ CONFIG_OCF_PATH= 00:19:17.901 +++ CONFIG_WPDK_DIR= 00:19:17.901 +++ CONFIG_VFIO_USER=n 00:19:17.901 +++ CONFIG_MAX_LCORES= 00:19:17.901 +++ CONFIG_ARCH=native 00:19:17.901 +++ CONFIG_TSAN=n 00:19:17.901 +++ CONFIG_VIRTIO=y 00:19:17.901 +++ CONFIG_HAVE_EVP_MAC=n 00:19:17.901 +++ CONFIG_IPSEC_MB=n 00:19:17.901 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:17.901 +++ CONFIG_ASAN=y 00:19:17.901 +++ CONFIG_SHARED=n 00:19:17.901 +++ CONFIG_VTUNE_DIR= 00:19:17.901 +++ CONFIG_RDMA_SET_TOS=y 00:19:17.901 +++ CONFIG_VBDEV_COMPRESS=n 00:19:17.901 +++ CONFIG_VFIO_USER_DIR= 00:19:17.901 +++ CONFIG_PGO_DIR= 00:19:17.901 +++ CONFIG_FUZZER_LIB= 00:19:17.901 +++ CONFIG_HAVE_EXECINFO_H=y 00:19:17.901 +++ CONFIG_USDT=n 00:19:17.901 +++ CONFIG_HAVE_KEYUTILS=y 00:19:17.901 +++ CONFIG_URING_ZNS=n 00:19:17.901 +++ CONFIG_FC_PATH= 00:19:17.901 +++ CONFIG_COVERAGE=y 00:19:17.901 +++ CONFIG_CUSTOMOCF=n 00:19:17.901 +++ CONFIG_DPDK_PKG_CONFIG=n 00:19:17.901 +++ CONFIG_WERROR=y 00:19:17.901 +++ CONFIG_DEBUG=y 00:19:17.901 +++ CONFIG_RDMA=y 00:19:17.901 +++ CONFIG_HAVE_ARC4RANDOM=n 00:19:17.901 +++ CONFIG_FUZZER=n 00:19:17.901 +++ CONFIG_FC=n 00:19:17.901 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:17.901 +++ CONFIG_HAVE_LIBARCHIVE=n 00:19:17.901 +++ CONFIG_DPDK_COMPRESSDEV=n 00:19:17.901 +++ CONFIG_CROSS_PREFIX= 00:19:17.901 +++ CONFIG_PREFIX=/usr/local 00:19:17.901 +++ CONFIG_HAVE_LIBBSD=n 00:19:17.901 +++ CONFIG_UBSAN=y 00:19:17.901 +++ CONFIG_PGO_CAPTURE=n 00:19:17.901 +++ CONFIG_UBLK=n 00:19:17.901 +++ CONFIG_ISAL_CRYPTO=y 00:19:17.901 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:17.901 +++ CONFIG_CRYPTO=n 00:19:17.901 +++ CONFIG_RBD=n 00:19:17.901 +++ CONFIG_LIBDIR= 00:19:17.901 +++ CONFIG_IPSEC_MB_DIR= 00:19:17.901 +++ CONFIG_PGO_USE=n 00:19:17.901 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:17.901 +++ CONFIG_GOLANG=n 00:19:17.901 +++ CONFIG_VHOST=y 00:19:17.901 +++ 
CONFIG_IDXD=y 00:19:17.901 +++ CONFIG_AVAHI=n 00:19:17.901 +++ CONFIG_URING=n 00:19:17.901 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:17.901 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:17.901 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:17.901 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:17.901 +++ _root=/home/vagrant/spdk_repo/spdk 00:19:17.901 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:17.901 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:17.901 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:19:17.901 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:17.901 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:17.901 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:17.901 +++ VHOST_APP=("$_app_dir/vhost") 00:19:17.901 +++ DD_APP=("$_app_dir/spdk_dd") 00:19:17.901 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:19:17.901 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:17.901 +++ [[ #ifndef SPDK_CONFIG_H 00:19:17.901 #define SPDK_CONFIG_H 00:19:17.901 #define SPDK_CONFIG_APPS 1 00:19:17.901 #define SPDK_CONFIG_ARCH native 00:19:17.901 #define SPDK_CONFIG_ASAN 1 00:19:17.901 #undef SPDK_CONFIG_AVAHI 00:19:17.901 #undef SPDK_CONFIG_CET 00:19:17.901 #define SPDK_CONFIG_COVERAGE 1 00:19:17.901 #define SPDK_CONFIG_CROSS_PREFIX 00:19:17.901 #undef SPDK_CONFIG_CRYPTO 00:19:17.901 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:17.901 #undef SPDK_CONFIG_CUSTOMOCF 00:19:17.901 #undef SPDK_CONFIG_DAOS 00:19:17.901 #define SPDK_CONFIG_DAOS_DIR 00:19:17.901 #define SPDK_CONFIG_DEBUG 1 00:19:17.901 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:17.901 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:17.901 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:17.901 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:17.901 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:17.901 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:17.901 #define SPDK_CONFIG_EXAMPLES 1 00:19:17.901 #undef SPDK_CONFIG_FC 00:19:17.901 #define SPDK_CONFIG_FC_PATH 00:19:17.901 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:17.902 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:17.902 #undef SPDK_CONFIG_FUSE 00:19:17.902 #undef SPDK_CONFIG_FUZZER 00:19:17.902 #define SPDK_CONFIG_FUZZER_LIB 00:19:17.902 #undef SPDK_CONFIG_GOLANG 00:19:17.902 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:19:17.902 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:19:17.902 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:17.902 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:17.902 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:17.902 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:17.902 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:17.902 #define SPDK_CONFIG_IDXD 1 00:19:17.902 #undef SPDK_CONFIG_IDXD_KERNEL 00:19:17.902 #undef SPDK_CONFIG_IPSEC_MB 00:19:17.902 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:17.902 #define SPDK_CONFIG_ISAL 1 00:19:17.902 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:17.902 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:17.902 #define SPDK_CONFIG_LIBDIR 00:19:17.902 #undef SPDK_CONFIG_LTO 00:19:17.902 #define SPDK_CONFIG_MAX_LCORES 00:19:17.902 #define SPDK_CONFIG_NVME_CUSE 1 00:19:17.902 #undef SPDK_CONFIG_OCF 00:19:17.902 #define SPDK_CONFIG_OCF_PATH 00:19:17.902 #define SPDK_CONFIG_OPENSSL_PATH 00:19:17.902 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:17.902 #define SPDK_CONFIG_PGO_DIR 00:19:17.902 #undef SPDK_CONFIG_PGO_USE 00:19:17.902 #define SPDK_CONFIG_PREFIX /usr/local 00:19:17.902 #define SPDK_CONFIG_RAID5F 1 00:19:17.902 
#undef SPDK_CONFIG_RBD 00:19:17.902 #define SPDK_CONFIG_RDMA 1 00:19:17.902 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:17.902 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:17.902 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:17.902 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:17.902 #undef SPDK_CONFIG_SHARED 00:19:17.902 #undef SPDK_CONFIG_SMA 00:19:17.902 #define SPDK_CONFIG_TESTS 1 00:19:17.902 #undef SPDK_CONFIG_TSAN 00:19:17.902 #undef SPDK_CONFIG_UBLK 00:19:17.902 #define SPDK_CONFIG_UBSAN 1 00:19:17.902 #define SPDK_CONFIG_UNIT_TESTS 1 00:19:17.902 #undef SPDK_CONFIG_URING 00:19:17.902 #define SPDK_CONFIG_URING_PATH 00:19:17.902 #undef SPDK_CONFIG_URING_ZNS 00:19:17.902 #undef SPDK_CONFIG_USDT 00:19:17.902 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:17.902 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:17.902 #undef SPDK_CONFIG_VFIO_USER 00:19:17.902 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:17.902 #define SPDK_CONFIG_VHOST 1 00:19:17.902 #define SPDK_CONFIG_VIRTIO 1 00:19:17.902 #undef SPDK_CONFIG_VTUNE 00:19:17.902 #define SPDK_CONFIG_VTUNE_DIR 00:19:17.902 #define SPDK_CONFIG_WERROR 1 00:19:17.902 #define SPDK_CONFIG_WPDK_DIR 00:19:17.902 #undef SPDK_CONFIG_XNVME 00:19:17.902 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:17.902 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:17.902 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.902 +++ [[ -e /bin/wpdk_common.sh ]] 00:19:17.902 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.902 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.902 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:17.902 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:17.902 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:17.902 ++++ export PATH 00:19:17.902 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:17.902 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:17.902 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:17.902 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:17.902 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:17.902 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:19:17.902 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:19:17.902 +++ TEST_TAG=N/A 00:19:17.902 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:19:17.902 +++ 
PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:19:17.902 ++++ uname -s 00:19:17.902 +++ PM_OS=Linux 00:19:17.902 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:17.902 +++ [[ Linux == FreeBSD ]] 00:19:17.902 +++ [[ Linux == Linux ]] 00:19:17.902 +++ [[ QEMU != QEMU ]] 00:19:17.902 +++ MONITOR_RESOURCES_PIDS=() 00:19:17.902 +++ declare -A MONITOR_RESOURCES_PIDS 00:19:17.902 +++ mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:19:17.902 ++ : 0 00:19:17.902 ++ export RUN_NIGHTLY 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_RUN_VALGRIND 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_TEST_UNITTEST 00:19:17.902 ++ : 00:19:17.902 ++ export SPDK_TEST_AUTOBUILD 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_RELEASE_BUILD 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_ISAL 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_ISCSI 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_ISCSI_INITIATOR 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_TEST_NVME 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVME_PMR 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVME_BP 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVME_CLI 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVME_CUSE 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVME_FDP 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVMF 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_VFIOUSER 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_VFIOUSER_QEMU 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_FUZZER 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_FUZZER_SHORT 00:19:17.902 ++ : rdma 00:19:17.902 ++ export SPDK_TEST_NVMF_TRANSPORT 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_RBD 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_VHOST 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_TEST_BLOCKDEV 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_IOAT 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_BLOBFS 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_VHOST_INIT 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_LVOL 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_VBDEV_COMPRESS 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_RUN_ASAN 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_RUN_UBSAN 00:19:17.902 ++ : 00:19:17.902 ++ export SPDK_RUN_EXTERNAL_DPDK 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_RUN_NON_ROOT 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_CRYPTO 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_FTL 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_OCF 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_VMD 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_OPAL 00:19:17.902 ++ : 00:19:17.902 ++ export SPDK_TEST_NATIVE_DPDK 00:19:17.902 ++ : true 00:19:17.902 ++ export SPDK_AUTOTEST_X 00:19:17.902 ++ : 1 00:19:17.902 ++ export SPDK_TEST_RAID5 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_URING 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_USDT 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_USE_IGB_UIO 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_SCHEDULER 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_SCANBUILD 00:19:17.902 ++ : 00:19:17.902 ++ export SPDK_TEST_NVMF_NICS 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_SMA 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_DAOS 00:19:17.902 ++ : 0 
00:19:17.902 ++ export SPDK_TEST_XNVME 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_ACCEL_DSA 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_ACCEL_IAA 00:19:17.902 ++ : 00:19:17.902 ++ export SPDK_TEST_FUZZER_TARGET 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_TEST_NVMF_MDNS 00:19:17.902 ++ : 0 00:19:17.902 ++ export SPDK_JSONRPC_GO_CLIENT 00:19:17.902 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:17.902 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:17.902 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:17.902 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:17.902 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:17.902 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:17.902 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:17.902 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:17.902 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:17.903 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:19:17.903 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:17.903 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:17.903 ++ export PYTHONDONTWRITEBYTECODE=1 00:19:17.903 ++ PYTHONDONTWRITEBYTECODE=1 00:19:17.903 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:17.903 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:17.903 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:17.903 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:17.903 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:19:17.903 ++ rm -rf /var/tmp/asan_suppression_file 00:19:17.903 ++ cat 00:19:17.903 ++ echo leak:libfuse3.so 00:19:17.903 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:17.903 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:17.903 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:17.903 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:17.903 ++ '[' -z /var/spdk/dependencies ']' 00:19:17.903 ++ export DEPENDENCY_DIR 00:19:17.903 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:17.903 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:17.903 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:17.903 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:17.903 ++ export QEMU_BIN= 00:19:17.903 ++ QEMU_BIN= 00:19:17.903 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:19:17.903 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:19:17.903 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:17.903 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:17.903 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:17.903 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:17.903 ++ '[' 0 -eq 0 ']' 00:19:17.903 ++ export valgrind= 00:19:17.903 ++ valgrind= 00:19:17.903 +++ uname -s 00:19:17.903 ++ '[' Linux = Linux ']' 00:19:17.903 ++ HUGEMEM=4096 00:19:17.903 ++ export CLEAR_HUGE=yes 00:19:17.903 ++ CLEAR_HUGE=yes 00:19:17.903 ++ [[ 0 -eq 1 ]] 00:19:17.903 ++ [[ 0 -eq 1 ]] 00:19:17.903 ++ MAKE=make 00:19:17.903 +++ nproc 00:19:17.903 ++ MAKEFLAGS=-j10 00:19:17.903 ++ export HUGEMEM=4096 00:19:17.903 ++ HUGEMEM=4096 00:19:17.903 ++ NO_HUGE=() 00:19:17.903 ++ TEST_MODE= 00:19:17.903 ++ [[ -z '' ]] 00:19:17.903 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:19:17.903 ++ exec 00:19:17.903 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:19:17.903 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:19:17.903 ++ set_test_storage 2147483648 00:19:17.903 ++ [[ -v testdir ]] 00:19:17.903 ++ local requested_size=2147483648 00:19:17.903 ++ local mount target_dir 00:19:17.903 ++ local -A mounts fss sizes avails uses 00:19:17.903 ++ local source fs size avail mount use 00:19:17.903 ++ local storage_fallback storage_candidates 00:19:17.903 +++ mktemp -udt spdk.XXXXXX 00:19:17.903 ++ storage_fallback=/tmp/spdk.gP6pJh 00:19:17.903 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:17.903 ++ [[ -n '' ]] 00:19:17.903 ++ [[ -n '' ]] 00:19:17.903 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.gP6pJh/tests/unit /tmp/spdk.gP6pJh 00:19:17.903 ++ requested_size=2214592512 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 +++ df -T 00:19:17.903 +++ grep -v Filesystem 00:19:17.903 ++ mounts["$mount"]=udev 00:19:17.903 ++ fss["$mount"]=devtmpfs 00:19:17.903 ++ avails["$mount"]=6224465920 00:19:17.903 ++ sizes["$mount"]=6224465920 00:19:17.903 ++ uses["$mount"]=0 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=tmpfs 00:19:17.903 ++ fss["$mount"]=tmpfs 00:19:17.903 ++ avails["$mount"]=1253396480 00:19:17.903 ++ sizes["$mount"]=1254514688 00:19:17.903 ++ uses["$mount"]=1118208 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=/dev/vda1 00:19:17.903 ++ fss["$mount"]=ext4 00:19:17.903 ++ avails["$mount"]=10305458176 00:19:17.903 ++ sizes["$mount"]=20616794112 00:19:17.903 ++ uses["$mount"]=10294558720 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=tmpfs 00:19:17.903 ++ fss["$mount"]=tmpfs 00:19:17.903 ++ avails["$mount"]=6272565248 00:19:17.903 ++ sizes["$mount"]=6272565248 00:19:17.903 ++ uses["$mount"]=0 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=tmpfs 00:19:17.903 ++ fss["$mount"]=tmpfs 00:19:17.903 ++ avails["$mount"]=5242880 00:19:17.903 ++ sizes["$mount"]=5242880 00:19:17.903 ++ uses["$mount"]=0 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=tmpfs 00:19:17.903 ++ fss["$mount"]=tmpfs 00:19:17.903 ++ avails["$mount"]=6272565248 00:19:17.903 ++ sizes["$mount"]=6272565248 00:19:17.903 ++ uses["$mount"]=0 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ 
mounts["$mount"]=/dev/loop0 00:19:17.903 ++ fss["$mount"]=squashfs 00:19:17.903 ++ avails["$mount"]=0 00:19:17.903 ++ sizes["$mount"]=67108864 00:19:17.903 ++ uses["$mount"]=67108864 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=/dev/vda15 00:19:17.903 ++ fss["$mount"]=vfat 00:19:17.903 ++ avails["$mount"]=103089152 00:19:17.903 ++ sizes["$mount"]=109422592 00:19:17.903 ++ uses["$mount"]=6334464 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=/dev/loop2 00:19:17.903 ++ fss["$mount"]=squashfs 00:19:17.903 ++ avails["$mount"]=0 00:19:17.903 ++ sizes["$mount"]=41025536 00:19:17.903 ++ uses["$mount"]=41025536 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=/dev/loop1 00:19:17.903 ++ fss["$mount"]=squashfs 00:19:17.903 ++ avails["$mount"]=0 00:19:17.903 ++ sizes["$mount"]=96337920 00:19:17.903 ++ uses["$mount"]=96337920 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=tmpfs 00:19:17.903 ++ fss["$mount"]=tmpfs 00:19:17.903 ++ avails["$mount"]=1254510592 00:19:17.903 ++ sizes["$mount"]=1254510592 00:19:17.903 ++ uses["$mount"]=0 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output 00:19:17.903 ++ fss["$mount"]=fuse.sshfs 00:19:17.903 ++ avails["$mount"]=91317284864 00:19:17.903 ++ sizes["$mount"]=105088212992 00:19:17.903 ++ uses["$mount"]=8385495040 00:19:17.903 ++ read -r source fs size use avail _ mount 00:19:17.903 ++ printf '* Looking for test storage...\n' 00:19:17.903 * Looking for test storage... 00:19:17.903 ++ local target_space new_size 00:19:17.903 ++ for target_dir in "${storage_candidates[@]}" 00:19:17.903 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:19:17.903 +++ awk '$1 !~ /Filesystem/{print $6}' 00:19:17.903 ++ mount=/ 00:19:17.903 ++ target_space=10305458176 00:19:17.903 ++ (( target_space == 0 || target_space < requested_size )) 00:19:17.903 ++ (( target_space >= requested_size )) 00:19:17.903 ++ [[ ext4 == tmpfs ]] 00:19:17.903 ++ [[ ext4 == ramfs ]] 00:19:17.903 ++ [[ / == / ]] 00:19:17.903 ++ new_size=12509151232 00:19:17.903 ++ (( new_size * 100 / sizes[/] > 95 )) 00:19:17.903 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:19:17.903 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:19:17.903 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:19:17.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:19:17.903 ++ return 0 00:19:17.903 ++ set -o errtrace 00:19:17.903 ++ shopt -s extdebug 00:19:17.903 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:19:17.903 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:17.903 19:12:33 -- common/autotest_common.sh@1673 -- # true 00:19:17.903 19:12:33 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:19:17.903 19:12:33 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:19:17.903 19:12:33 -- common/autotest_common.sh@29 -- # exec 00:19:17.903 19:12:33 -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:17.903 19:12:33 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:19:17.903 19:12:33 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:17.903 19:12:33 -- common/autotest_common.sh@18 -- # set -x 00:19:17.903 19:12:33 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:19:17.903 19:12:33 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:19:17.903 19:12:33 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:19:17.903 19:12:33 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:19:17.903 19:12:33 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:19:17.903 19:12:33 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:19:17.903 19:12:33 -- unit/unittest.sh@179 -- # hash lcov 00:19:17.903 19:12:33 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:19:17.903 19:12:33 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:19:17.903 19:12:33 -- unit/unittest.sh@180 -- # cov_avail=yes 00:19:17.903 19:12:33 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:19:17.903 19:12:33 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:19:17.903 19:12:33 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:19:17.903 19:12:33 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:19:17.903 19:12:33 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:19:17.903 --rc lcov_branch_coverage=1 00:19:17.903 --rc lcov_function_coverage=1 00:19:17.903 --rc genhtml_branch_coverage=1 00:19:17.903 --rc genhtml_function_coverage=1 00:19:17.903 --rc genhtml_legend=1 00:19:17.903 --rc geninfo_all_blocks=1 00:19:17.903 ' 00:19:17.903 19:12:33 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:19:17.903 --rc lcov_branch_coverage=1 00:19:17.903 --rc lcov_function_coverage=1 00:19:17.903 --rc genhtml_branch_coverage=1 00:19:17.903 --rc genhtml_function_coverage=1 00:19:17.903 --rc genhtml_legend=1 00:19:17.903 --rc geninfo_all_blocks=1 00:19:17.904 ' 00:19:17.904 19:12:33 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:19:17.904 --rc lcov_branch_coverage=1 00:19:17.904 --rc lcov_function_coverage=1 00:19:17.904 --rc genhtml_branch_coverage=1 00:19:17.904 --rc genhtml_function_coverage=1 00:19:17.904 --rc genhtml_legend=1 00:19:17.904 --rc geninfo_all_blocks=1 00:19:17.904 --no-external' 00:19:17.904 19:12:33 -- unit/unittest.sh@200 -- # LCOV='lcov 00:19:17.904 --rc lcov_branch_coverage=1 00:19:17.904 --rc lcov_function_coverage=1 00:19:17.904 --rc genhtml_branch_coverage=1 00:19:17.904 --rc genhtml_function_coverage=1 00:19:17.904 --rc genhtml_legend=1 00:19:17.904 --rc geninfo_all_blocks=1 00:19:17.904 --no-external' 00:19:17.904 19:12:33 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:19:19.805 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:19:19.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:19:20.064 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:19:20.064 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:19:20.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:19:20.065 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:19:20.065 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:19:20.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:19:20.325 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:19:20.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:19:20.326 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:19:20.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:19:20.326 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:19:20.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:19:20.326 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:19:20.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:19:20.326 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:19:20.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:19:20.326 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:19:20.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:19:20.326 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:19:20.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:20:16.620 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:20:16.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:20:16.620 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:20:16.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:20:16.620 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:20:16.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:20:16.620 19:13:31 -- unit/unittest.sh@206 -- # uname -m 00:20:16.620 19:13:31 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:20:16.620 19:13:31 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:20:16.620 19:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:16.620 19:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.620 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:20:16.620 ************************************ 00:20:16.620 START TEST unittest_pci_event 00:20:16.620 ************************************ 00:20:16.620 19:13:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:20:16.620 00:20:16.620 00:20:16.620 CUnit - A unit testing framework for C - Version 2.1-3 00:20:16.620 http://cunit.sourceforge.net/ 00:20:16.620 00:20:16.620 00:20:16.620 Suite: pci_event 00:20:16.620 Test: test_pci_parse_event ...[2024-04-18 19:13:31.566171] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:20:16.620 [2024-04-18 19:13:31.566784] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:20:16.620 passed 00:20:16.620 00:20:16.620 Run Summary: Type Total Ran Passed Failed Inactive 00:20:16.620 suites 1 1 n/a 0 0 00:20:16.620 tests 1 1 1 0 0 00:20:16.620 asserts 15 15 15 0 n/a 00:20:16.620 00:20:16.620 Elapsed time = 
0.001 seconds 00:20:16.620 00:20:16.620 real 0m0.044s 00:20:16.620 user 0m0.030s 00:20:16.620 sys 0m0.010s 00:20:16.620 19:13:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.620 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:20:16.620 ************************************ 00:20:16.620 END TEST unittest_pci_event 00:20:16.620 ************************************ 00:20:16.620 19:13:31 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:20:16.620 19:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:16.620 19:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.620 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:20:16.620 ************************************ 00:20:16.620 START TEST unittest_include 00:20:16.620 ************************************ 00:20:16.620 19:13:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:20:16.620 00:20:16.620 00:20:16.620 CUnit - A unit testing framework for C - Version 2.1-3 00:20:16.620 http://cunit.sourceforge.net/ 00:20:16.620 00:20:16.620 00:20:16.620 Suite: histogram 00:20:16.620 Test: histogram_test ...passed 00:20:16.620 Test: histogram_merge ...passed 00:20:16.620 00:20:16.620 Run Summary: Type Total Ran Passed Failed Inactive 00:20:16.620 suites 1 1 n/a 0 0 00:20:16.620 tests 2 2 2 0 0 00:20:16.620 asserts 50 50 50 0 n/a 00:20:16.620 00:20:16.620 Elapsed time = 0.005 seconds 00:20:16.620 00:20:16.620 real 0m0.035s 00:20:16.620 user 0m0.015s 00:20:16.620 sys 0m0.020s 00:20:16.620 19:13:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.620 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:20:16.620 ************************************ 00:20:16.620 END TEST unittest_include 00:20:16.620 ************************************ 00:20:16.620 19:13:31 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:20:16.620 19:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:16.620 19:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.620 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:20:16.620 ************************************ 00:20:16.620 START TEST unittest_bdev 00:20:16.620 ************************************ 00:20:16.620 19:13:31 -- common/autotest_common.sh@1111 -- # unittest_bdev 00:20:16.620 19:13:31 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:20:16.620 00:20:16.620 00:20:16.620 CUnit - A unit testing framework for C - Version 2.1-3 00:20:16.620 http://cunit.sourceforge.net/ 00:20:16.620 00:20:16.620 00:20:16.620 Suite: bdev 00:20:16.620 Test: bytes_to_blocks_test ...passed 00:20:16.620 Test: num_blocks_test ...passed 00:20:16.620 Test: io_valid_test ...passed 00:20:16.620 Test: open_write_test ...[2024-04-18 19:13:31.899718] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7987:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:20:16.620 [2024-04-18 19:13:31.900188] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7987:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:20:16.620 [2024-04-18 19:13:31.900387] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7987:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:20:16.620 passed 00:20:16.620 Test: claim_test ...passed 00:20:16.620 Test: 
alias_add_del_test ...[2024-04-18 19:13:32.005210] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4547:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:20:16.620 [2024-04-18 19:13:32.005522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4577:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:20:16.620 [2024-04-18 19:13:32.005693] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4547:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:20:16.620 passed 00:20:16.620 Test: get_device_stat_test ...passed 00:20:16.620 Test: bdev_io_types_test ...passed 00:20:16.620 Test: bdev_io_wait_test ...passed 00:20:16.620 Test: bdev_io_spans_split_test ...passed 00:20:16.620 Test: bdev_io_boundary_split_test ...passed 00:20:16.620 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-18 19:13:32.215499] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3184:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:20:16.620 passed 00:20:16.620 Test: bdev_io_mix_split_test ...passed 00:20:16.620 Test: bdev_io_split_with_io_wait ...passed 00:20:16.620 Test: bdev_io_write_unit_split_test ...[2024-04-18 19:13:32.372520] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2739:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:20:16.620 [2024-04-18 19:13:32.372806] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2739:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:20:16.620 [2024-04-18 19:13:32.372872] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2739:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:20:16.620 [2024-04-18 19:13:32.373008] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2739:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:20:16.620 passed 00:20:16.620 Test: bdev_io_alignment_with_boundary ...passed 00:20:16.620 Test: bdev_io_alignment ...passed 00:20:16.879 Test: bdev_histograms ...passed 00:20:16.879 Test: bdev_write_zeroes ...passed 00:20:16.879 Test: bdev_compare_and_write ...passed 00:20:16.879 Test: bdev_compare ...passed 00:20:17.138 Test: bdev_compare_emulated ...passed 00:20:17.138 Test: bdev_zcopy_write ...passed 00:20:17.138 Test: bdev_zcopy_read ...passed 00:20:17.138 Test: bdev_open_while_hotremove ...passed 00:20:17.138 Test: bdev_close_while_hotremove ...passed 00:20:17.138 Test: bdev_open_ext_test ...[2024-04-18 19:13:32.979932] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8093:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:20:17.138 passed 00:20:17.138 Test: bdev_open_ext_unregister ...[2024-04-18 19:13:32.980343] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8093:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:20:17.138 passed 00:20:17.138 Test: bdev_set_io_timeout ...passed 00:20:17.396 Test: bdev_set_qd_sampling ...passed 00:20:17.396 Test: lba_range_overlap ...passed 00:20:17.396 Test: lock_lba_range_check_ranges ...passed 00:20:17.396 Test: lock_lba_range_with_io_outstanding ...passed 00:20:17.396 Test: lock_lba_range_overlapped ...passed 00:20:17.396 Test: bdev_quiesce ...[2024-04-18 19:13:33.267153] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10016:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:20:17.396 passed 00:20:17.655 Test: bdev_io_abort ...passed 00:20:17.655 Test: bdev_unmap ...passed 00:20:17.655 Test: bdev_write_zeroes_split_test ...passed 00:20:17.655 Test: bdev_set_options_test ...passed 00:20:17.655 Test: bdev_get_memory_domains ...passed[2024-04-18 19:13:33.449596] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 482:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:20:17.655 00:20:17.655 Test: bdev_io_ext ...passed 00:20:17.655 Test: bdev_io_ext_no_opts ...passed 00:20:17.913 Test: bdev_io_ext_invalid_opts ...passed 00:20:17.913 Test: bdev_io_ext_split ...passed 00:20:17.913 Test: bdev_io_ext_bounce_buffer ...passed 00:20:17.913 Test: bdev_register_uuid_alias ...[2024-04-18 19:13:33.733286] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4547:bdev_name_add: *ERROR*: Bdev name 937b2f7e-024a-46af-8298-7840067da90a already exists 00:20:17.913 [2024-04-18 19:13:33.733544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7650:bdev_register: *ERROR*: Unable to add uuid:937b2f7e-024a-46af-8298-7840067da90a alias for bdev bdev0 00:20:17.913 passed 00:20:17.913 Test: bdev_unregister_by_name ...[2024-04-18 19:13:33.759988] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7883:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:20:17.913 [2024-04-18 19:13:33.760126] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7891:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:20:17.913 passed 00:20:17.913 Test: for_each_bdev_test ...passed 00:20:17.913 Test: bdev_seek_test ...passed 00:20:18.172 Test: bdev_copy ...passed 00:20:18.172 Test: bdev_copy_split_test ...passed 00:20:18.172 Test: examine_locks ...passed 00:20:18.172 Test: claim_v2_rwo ...[2024-04-18 19:13:33.917339] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7987:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.917462] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8617:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.917558] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.917679] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.917759] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:20:18.172 passed[2024-04-18 19:13:33.917830] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8612:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:20:18.172 00:20:18.172 Test: claim_v2_rom ...[2024-04-18 19:13:33.918134] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7987:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.918278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.918382] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 
00:20:18.172 [2024-04-18 19:13:33.918447] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.918542] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8655:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:20:18.172 [2024-04-18 19:13:33.918650] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8650:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:20:18.172 passed 00:20:18.172 Test: claim_v2_rwm ...[2024-04-18 19:13:33.918986] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8685:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:20:18.172 [2024-04-18 19:13:33.919143] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7987:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.919240] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.919320] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.919416] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.919469] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8705:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.919589] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8685:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:20:18.172 passed 00:20:18.172 Test: claim_v2_existing_writer ...[2024-04-18 19:13:33.919845] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8650:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:20:18.172 passed[2024-04-18 19:13:33.920009] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8650:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:20:18.172 00:20:18.172 Test: claim_v2_existing_v1 ...[2024-04-18 19:13:33.920272] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.920436] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.920480] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:20:18.172 passed 00:20:18.172 Test: claim_v1_existing_v2 ...[2024-04-18 19:13:33.920760] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:20:18.172 [2024-04-18 19:13:33.920877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:20:18.172 [2024-04-18 
19:13:33.920937] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:20:18.172 passed 00:20:18.172 Test: examine_claimed ...[2024-04-18 19:13:33.921488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8782:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:20:18.172 passed 00:20:18.172 00:20:18.172 Run Summary: Type Total Ran Passed Failed Inactive 00:20:18.172 suites 1 1 n/a 0 0 00:20:18.172 tests 59 59 59 0 0 00:20:18.172 asserts 4599 4599 4599 0 n/a 00:20:18.172 00:20:18.172 Elapsed time = 2.085 seconds 00:20:18.172 19:13:33 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:20:18.172 00:20:18.172 00:20:18.172 CUnit - A unit testing framework for C - Version 2.1-3 00:20:18.172 http://cunit.sourceforge.net/ 00:20:18.172 00:20:18.172 00:20:18.172 Suite: nvme 00:20:18.172 Test: test_create_ctrlr ...passed 00:20:18.172 Test: test_reset_ctrlr ...[2024-04-18 19:13:33.982048] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.172 passed 00:20:18.172 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:20:18.172 Test: test_failover_ctrlr ...passed 00:20:18.172 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-18 19:13:33.985746] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.172 [2024-04-18 19:13:33.986086] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.172 [2024-04-18 19:13:33.986419] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.172 passed 00:20:18.172 Test: test_pending_reset ...[2024-04-18 19:13:33.988476] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.172 [2024-04-18 19:13:33.988867] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.172 passed 00:20:18.172 Test: test_attach_ctrlr ...[2024-04-18 19:13:33.990473] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4273:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:18.172 passed 00:20:18.172 Test: test_aer_cb ...passed 00:20:18.172 Test: test_submit_nvme_cmd ...passed 00:20:18.172 Test: test_add_remove_trid ...passed 00:20:18.172 Test: test_abort ...[2024-04-18 19:13:33.995127] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7399:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:20:18.172 passed 00:20:18.173 Test: test_get_io_qpair ...passed 00:20:18.173 Test: test_bdev_unregister ...passed 00:20:18.173 Test: test_compare_ns ...passed 00:20:18.173 Test: test_init_ana_log_page ...passed 00:20:18.173 Test: test_get_memory_domains ...passed 00:20:18.173 Test: test_reconnect_qpair ...[2024-04-18 19:13:33.999319] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:18.173 passed 00:20:18.173 Test: test_create_bdev_ctrlr ...[2024-04-18 19:13:34.000243] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5325:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:20:18.173 passed 00:20:18.173 Test: test_add_multi_ns_to_bdev ...[2024-04-18 19:13:34.001892] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4529:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:20:18.173 passed 00:20:18.173 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:20:18.173 Test: test_admin_path ...passed 00:20:18.173 Test: test_reset_bdev_ctrlr ...passed 00:20:18.173 Test: test_find_io_path ...passed 00:20:18.173 Test: test_retry_io_if_ana_state_is_updating ...passed 00:20:18.173 Test: test_retry_io_for_io_path_error ...passed 00:20:18.173 Test: test_retry_io_count ...passed 00:20:18.173 Test: test_concurrent_read_ana_log_page ...passed 00:20:18.173 Test: test_retry_io_for_ana_error ...passed 00:20:18.173 Test: test_check_io_error_resiliency_params ...[2024-04-18 19:13:34.011598] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6019:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:20:18.173 [2024-04-18 19:13:34.011775] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6023:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:20:18.173 [2024-04-18 19:13:34.011909] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6032:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:20:18.173 [2024-04-18 19:13:34.011983] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6035:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:20:18.173 [2024-04-18 19:13:34.012107] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6047:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:20:18.173 [2024-04-18 19:13:34.012249] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6047:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:20:18.173 [2024-04-18 19:13:34.012305] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6027:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:20:18.173 [2024-04-18 19:13:34.012358] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6042:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:20:18.173 [2024-04-18 19:13:34.012384] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6039:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:20:18.173 passed 00:20:18.173 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:20:18.173 Test: test_reconnect_ctrlr ...[2024-04-18 19:13:34.013120] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.013406] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:18.173 [2024-04-18 19:13:34.013814] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.014030] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.014231] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 passed 00:20:18.173 Test: test_retry_failover_ctrlr ...[2024-04-18 19:13:34.014710] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 passed 00:20:18.173 Test: test_fail_path ...[2024-04-18 19:13:34.015547] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.015814] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.016087] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.016354] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.016616] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 passed 00:20:18.173 Test: test_nvme_ns_cmp ...passed 00:20:18.173 Test: test_ana_transition ...passed 00:20:18.173 Test: test_set_preferred_path ...passed 00:20:18.173 Test: test_find_next_io_path ...passed 00:20:18.173 Test: test_find_io_path_min_qd ...passed 00:20:18.173 Test: test_disable_auto_failback ...[2024-04-18 19:13:34.019490] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 passed 00:20:18.173 Test: test_set_multipath_policy ...passed 00:20:18.173 Test: test_uuid_generation ...passed 00:20:18.173 Test: test_retry_io_to_same_path ...passed 00:20:18.173 Test: test_race_between_reset_and_disconnected ...passed 00:20:18.173 Test: test_ctrlr_op_rpc ...passed 00:20:18.173 Test: test_bdev_ctrlr_op_rpc ...passed 00:20:18.173 Test: test_disable_enable_ctrlr ...[2024-04-18 19:13:34.025438] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.173 [2024-04-18 19:13:34.025798] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:18.173 passed 00:20:18.173 Test: test_delete_ctrlr_done ...passed 00:20:18.173 Test: test_ns_remove_during_reset ...passed 00:20:18.173 00:20:18.173 Run Summary: Type Total Ran Passed Failed Inactive 00:20:18.173 suites 1 1 n/a 0 0 00:20:18.173 tests 48 48 48 0 0 00:20:18.173 asserts 3565 3565 3565 0 n/a 00:20:18.173 00:20:18.173 Elapsed time = 0.037 seconds 00:20:18.173 19:13:34 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:20:18.173 00:20:18.173 00:20:18.173 CUnit - A unit testing framework for C - Version 2.1-3 00:20:18.173 http://cunit.sourceforge.net/ 00:20:18.173 00:20:18.173 Test Options 00:20:18.173 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:20:18.173 00:20:18.173 Suite: raid 00:20:18.173 Test: test_create_raid ...passed 00:20:18.173 Test: test_create_raid_superblock ...passed 00:20:18.173 Test: test_delete_raid ...passed 00:20:18.173 Test: test_create_raid_invalid_args ...[2024-04-18 19:13:34.085094] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:20:18.173 [2024-04-18 19:13:34.085797] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:20:18.173 [2024-04-18 19:13:34.086457] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:20:18.173 [2024-04-18 19:13:34.086859] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:20:18.173 [2024-04-18 19:13:34.087973] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:20:18.173 passed 00:20:18.173 Test: test_delete_raid_invalid_args ...passed 00:20:18.173 Test: test_io_channel ...passed 00:20:18.173 Test: test_reset_io ...passed 00:20:18.173 Test: test_write_io ...passed 00:20:18.173 Test: test_read_io ...passed 00:20:19.548 Test: test_unmap_io ...passed 00:20:19.548 Test: test_io_failure ...[2024-04-18 19:13:35.390357] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:20:19.548 passed 00:20:19.548 Test: test_multi_raid_no_io ...passed 00:20:19.548 Test: test_multi_raid_with_io ...passed 00:20:19.548 Test: test_io_type_supported ...passed 00:20:19.548 Test: test_raid_json_dump_info ...passed 00:20:19.548 Test: test_context_size ...passed 00:20:19.548 Test: test_raid_level_conversions ...passed 00:20:19.548 Test: test_raid_io_split ...passedTest Options 00:20:19.548 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:20:19.548 00:20:19.548 Suite: raid_dif 00:20:19.548 Test: test_create_raid ...passed 00:20:19.548 Test: test_create_raid_superblock ...passed 00:20:19.548 Test: test_delete_raid ...passed 00:20:19.548 Test: test_create_raid_invalid_args ...[2024-04-18 19:13:35.400183] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:20:19.548 [2024-04-18 19:13:35.400382] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:20:19.548 [2024-04-18 19:13:35.400678] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:20:19.548 [2024-04-18 19:13:35.400825] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:20:19.548 [2024-04-18 19:13:35.401475] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:20:19.548 passed 00:20:19.548 Test: test_delete_raid_invalid_args ...passed 00:20:19.548 Test: test_io_channel ...passed 00:20:19.548 Test: test_reset_io ...passed 00:20:19.548 Test: test_write_io ...passed 00:20:19.548 Test: test_read_io ...passed 00:20:20.986 Test: test_unmap_io ...passed 00:20:20.986 Test: test_io_failure ...[2024-04-18 19:13:36.596901] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:20:20.986 passed 00:20:20.986 Test: test_multi_raid_no_io ...passed 00:20:20.986 Test: test_multi_raid_with_io ...passed 00:20:20.986 Test: test_io_type_supported ...passed 00:20:20.986 Test: test_raid_json_dump_info ...passed 00:20:20.986 Test: test_context_size ...passed 00:20:20.986 Test: test_raid_level_conversions ...passed 00:20:20.986 Test: test_raid_io_split ...passedTest Options 00:20:20.986 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:20:20.986 00:20:20.986 Suite: raid_single_run 00:20:20.986 Test: test_raid_process ...passed 00:20:20.986 00:20:20.986 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.986 suites 3 3 n/a 0 0 00:20:20.986 tests 37 37 37 0 0 00:20:20.986 asserts 355354 355354 355354 0 n/a 00:20:20.986 00:20:20.986 Elapsed time = 2.523 seconds 00:20:20.986 19:13:36 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:20:20.986 00:20:20.986 00:20:20.986 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.986 http://cunit.sourceforge.net/ 00:20:20.986 00:20:20.986 00:20:20.986 Suite: raid_sb 00:20:20.986 Test: test_raid_bdev_write_superblock ...passed 00:20:20.986 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:20:20.986 Test: test_raid_bdev_parse_superblock ...[2024-04-18 19:13:36.666503] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 141:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:20:20.986 passed 00:20:20.986 00:20:20.986 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.986 suites 1 1 n/a 0 0 00:20:20.986 tests 3 3 3 0 0 00:20:20.986 asserts 32 32 32 0 n/a 00:20:20.986 00:20:20.986 Elapsed time = 0.001 seconds 00:20:20.986 19:13:36 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:20:20.986 00:20:20.986 00:20:20.986 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.986 http://cunit.sourceforge.net/ 00:20:20.986 00:20:20.986 00:20:20.986 Suite: concat 00:20:20.986 Test: test_concat_start ...passed 00:20:20.986 Test: test_concat_rw ...passed 00:20:20.986 Test: test_concat_null_payload ...passed 00:20:20.986 00:20:20.986 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.986 suites 1 1 n/a 0 0 00:20:20.986 tests 3 3 3 0 0 00:20:20.986 asserts 8097 8097 8097 0 n/a 00:20:20.986 00:20:20.986 Elapsed time = 0.008 seconds 00:20:20.986 19:13:36 -- 
unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:20:20.986 00:20:20.986 00:20:20.986 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.986 http://cunit.sourceforge.net/ 00:20:20.986 00:20:20.986 00:20:20.986 Suite: raid1 00:20:20.986 Test: test_raid1_start ...passed 00:20:20.986 Test: test_raid1_read_balancing ...passed 00:20:20.986 00:20:20.986 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.986 suites 1 1 n/a 0 0 00:20:20.986 tests 2 2 2 0 0 00:20:20.986 asserts 2856 2856 2856 0 n/a 00:20:20.986 00:20:20.986 Elapsed time = 0.003 seconds 00:20:20.986 19:13:36 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:20:20.986 00:20:20.986 00:20:20.986 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.986 http://cunit.sourceforge.net/ 00:20:20.986 00:20:20.986 00:20:20.986 Suite: zone 00:20:20.986 Test: test_zone_get_operation ...passed 00:20:20.986 Test: test_bdev_zone_get_info ...passed 00:20:20.986 Test: test_bdev_zone_management ...passed 00:20:20.986 Test: test_bdev_zone_append ...passed 00:20:20.986 Test: test_bdev_zone_append_with_md ...passed 00:20:20.986 Test: test_bdev_zone_appendv ...passed 00:20:20.986 Test: test_bdev_zone_appendv_with_md ...passed 00:20:20.986 Test: test_bdev_io_get_append_location ...passed 00:20:20.986 00:20:20.986 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.986 suites 1 1 n/a 0 0 00:20:20.986 tests 8 8 8 0 0 00:20:20.986 asserts 94 94 94 0 n/a 00:20:20.986 00:20:20.986 Elapsed time = 0.001 seconds 00:20:20.986 19:13:36 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:20:20.986 00:20:20.986 00:20:20.986 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.986 http://cunit.sourceforge.net/ 00:20:20.986 00:20:20.986 00:20:20.986 Suite: gpt_parse 00:20:20.986 Test: test_parse_mbr_and_primary ...[2024-04-18 19:13:36.841290] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:20:20.986 [2024-04-18 19:13:36.841739] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:20:20.986 [2024-04-18 19:13:36.841884] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:20:20.986 [2024-04-18 19:13:36.841983] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:20:20.986 [2024-04-18 19:13:36.842077] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:20:20.986 [2024-04-18 19:13:36.842291] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:20:20.986 passed 00:20:20.986 Test: test_parse_secondary ...[2024-04-18 19:13:36.843067] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:20:20.986 [2024-04-18 19:13:36.843212] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:20:20.986 [2024-04-18 19:13:36.843307] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:20:20.986 [2024-04-18 19:13:36.843356] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:20:20.986 passed 00:20:20.987 Test: test_check_mbr ...[2024-04-18 19:13:36.844108] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:20:20.987 [2024-04-18 19:13:36.844183] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:20:20.987 passed 00:20:20.987 Test: test_read_header ...[2024-04-18 19:13:36.844457] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:20:20.987 [2024-04-18 19:13:36.844567] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:20:20.987 passed[2024-04-18 19:13:36.844748] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:20:20.987 [2024-04-18 19:13:36.844814] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:20:20.987 [2024-04-18 19:13:36.844858] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:20:20.987 [2024-04-18 19:13:36.844901] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:20:20.987 00:20:20.987 Test: test_read_partitions ...[2024-04-18 19:13:36.845077] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:20:20.987 [2024-04-18 19:13:36.845278] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:20:20.987 [2024-04-18 19:13:36.845337] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:20:20.987 [2024-04-18 19:13:36.845375] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:20:20.987 [2024-04-18 19:13:36.845673] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:20:20.987 passed 00:20:20.987 00:20:20.987 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.987 suites 1 1 n/a 0 0 00:20:20.987 tests 5 5 5 0 0 00:20:20.987 asserts 33 33 33 0 n/a 00:20:20.987 00:20:20.987 Elapsed time = 0.003 seconds 00:20:20.987 19:13:36 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:20:20.987 00:20:20.987 00:20:20.987 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.987 http://cunit.sourceforge.net/ 00:20:20.987 00:20:20.987 00:20:20.987 Suite: bdev_part 00:20:20.987 Test: part_test ...[2024-04-18 19:13:36.877868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4547:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:20:20.987 passed 00:20:20.987 Test: part_free_test ...passed 00:20:21.245 Test: part_get_io_channel_test ...passed 00:20:21.245 Test: part_construct_ext ...passed 00:20:21.245 00:20:21.245 Run Summary: Type Total Ran Passed Failed Inactive 00:20:21.245 suites 1 1 n/a 0 0 00:20:21.245 tests 4 4 4 0 0 00:20:21.245 asserts 48 48 48 0 n/a 00:20:21.245 00:20:21.245 Elapsed time = 0.053 seconds 00:20:21.245 19:13:36 -- 
unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:20:21.245 00:20:21.245 00:20:21.245 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.245 http://cunit.sourceforge.net/ 00:20:21.245 00:20:21.245 00:20:21.245 Suite: scsi_nvme_suite 00:20:21.245 Test: scsi_nvme_translate_test ...passed 00:20:21.245 00:20:21.245 Run Summary: Type Total Ran Passed Failed Inactive 00:20:21.245 suites 1 1 n/a 0 0 00:20:21.245 tests 1 1 1 0 0 00:20:21.245 asserts 104 104 104 0 n/a 00:20:21.245 00:20:21.245 Elapsed time = 0.000 seconds 00:20:21.245 19:13:36 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:20:21.245 00:20:21.245 00:20:21.245 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.245 http://cunit.sourceforge.net/ 00:20:21.245 00:20:21.245 00:20:21.245 Suite: lvol 00:20:21.245 Test: ut_lvs_init ...[2024-04-18 19:13:37.010914] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:20:21.245 [2024-04-18 19:13:37.011601] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:20:21.245 passed 00:20:21.245 Test: ut_lvol_init ...passed 00:20:21.245 Test: ut_lvol_snapshot ...passed 00:20:21.245 Test: ut_lvol_clone ...passed 00:20:21.245 Test: ut_lvs_destroy ...passed 00:20:21.245 Test: ut_lvs_unload ...passed 00:20:21.245 Test: ut_lvol_resize ...[2024-04-18 19:13:37.014890] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:20:21.245 passed 00:20:21.245 Test: ut_lvol_set_read_only ...passed 00:20:21.245 Test: ut_lvol_hotremove ...passed 00:20:21.245 Test: ut_vbdev_lvol_get_io_channel ...passed 00:20:21.245 Test: ut_vbdev_lvol_io_type_supported ...passed 00:20:21.245 Test: ut_lvol_read_write ...passed 00:20:21.245 Test: ut_vbdev_lvol_submit_request ...passed 00:20:21.245 Test: ut_lvol_examine_config ...passed 00:20:21.246 Test: ut_lvol_examine_disk ...[2024-04-18 19:13:37.017218] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:20:21.246 passed 00:20:21.246 Test: ut_lvol_rename ...[2024-04-18 19:13:37.018794] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:20:21.246 [2024-04-18 19:13:37.019075] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:20:21.246 passed 00:20:21.246 Test: ut_bdev_finish ...passed 00:20:21.246 Test: ut_lvs_rename ...passed 00:20:21.246 Test: ut_lvol_seek ...passed 00:20:21.246 Test: ut_esnap_dev_create ...[2024-04-18 19:13:37.020946] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:20:21.246 [2024-04-18 19:13:37.021145] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:20:21.246 [2024-04-18 19:13:37.021288] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:20:21.246 [2024-04-18 19:13:37.021399] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap 
bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:20:21.246 passed 00:20:21.246 Test: ut_lvol_esnap_clone_bad_args ...[2024-04-18 19:13:37.021959] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:20:21.246 [2024-04-18 19:13:37.022101] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:20:21.246 passed 00:20:21.246 00:20:21.246 Run Summary: Type Total Ran Passed Failed Inactive 00:20:21.246 suites 1 1 n/a 0 0 00:20:21.246 tests 21 21 21 0 0 00:20:21.246 asserts 758 758 758 0 n/a 00:20:21.246 00:20:21.246 Elapsed time = 0.007 seconds 00:20:21.246 19:13:37 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:20:21.246 00:20:21.246 00:20:21.246 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.246 http://cunit.sourceforge.net/ 00:20:21.246 00:20:21.246 00:20:21.246 Suite: zone_block 00:20:21.246 Test: test_zone_block_create ...passed 00:20:21.246 Test: test_zone_block_create_invalid ...[2024-04-18 19:13:37.090587] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:20:21.246 [2024-04-18 19:13:37.091093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-18 19:13:37.091431] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:20:21.246 [2024-04-18 19:13:37.091604] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-18 19:13:37.091889] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:20:21.246 [2024-04-18 19:13:37.092022] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-18 19:13:37.092214] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:20:21.246 [2024-04-18 19:13:37.092361] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:20:21.246 Test: test_get_zone_info ...[2024-04-18 19:13:37.093230] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.093419] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.093578] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_supported_io_types ...passed 00:20:21.246 Test: test_reset_zone ...[2024-04-18 19:13:37.094957] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:20:21.246 [2024-04-18 19:13:37.095127] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_open_zone ...[2024-04-18 19:13:37.096029] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.096906] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.097084] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_zone_write ...[2024-04-18 19:13:37.097861] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:20:21.246 [2024-04-18 19:13:37.098014] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.098175] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:20:21.246 [2024-04-18 19:13:37.098319] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.105047] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:20:21.246 [2024-04-18 19:13:37.105301] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.105447] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:20:21.246 [2024-04-18 19:13:37.105593] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.112129] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:20:21.246 [2024-04-18 19:13:37.112412] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_zone_read ...[2024-04-18 19:13:37.113308] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:20:21.246 [2024-04-18 19:13:37.113449] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.113629] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:20:21.246 [2024-04-18 19:13:37.113748] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:20:21.246 [2024-04-18 19:13:37.114393] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:20:21.246 [2024-04-18 19:13:37.114534] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_close_zone ...[2024-04-18 19:13:37.115265] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.115502] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.115876] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.116019] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_finish_zone ...[2024-04-18 19:13:37.117047] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.117205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 passed 00:20:21.246 Test: test_append_zone ...[2024-04-18 19:13:37.117955] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:20:21.246 [2024-04-18 19:13:37.118105] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.118270] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:20:21.246 [2024-04-18 19:13:37.118399] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:20:21.246 [2024-04-18 19:13:37.131539] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:20:21.246 [2024-04-18 19:13:37.131792] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:20:21.246 passed 00:20:21.246 00:20:21.246 Run Summary: Type Total Ran Passed Failed Inactive 00:20:21.246 suites 1 1 n/a 0 0 00:20:21.246 tests 11 11 11 0 0 00:20:21.246 asserts 3437 3437 3437 0 n/a 00:20:21.246 00:20:21.246 Elapsed time = 0.038 seconds 00:20:21.505 19:13:37 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:20:21.505 00:20:21.505 00:20:21.505 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.505 http://cunit.sourceforge.net/ 00:20:21.505 00:20:21.505 00:20:21.505 Suite: bdev 00:20:21.505 Test: basic ...[2024-04-18 19:13:37.254225] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558bea8e0ae1): Operation not permitted (rc=-1) 00:20:21.505 [2024-04-18 19:13:37.254946] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x558bea8e0aa0): Operation not permitted (rc=-1) 00:20:21.505 [2024-04-18 19:13:37.255143] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558bea8e0ae1): Operation not permitted (rc=-1) 00:20:21.505 passed 00:20:21.505 Test: unregister_and_close ...passed 00:20:21.505 Test: unregister_and_close_different_threads ...passed 00:20:21.763 Test: basic_qos ...passed 00:20:21.763 Test: put_channel_during_reset ...passed 00:20:21.763 Test: aborted_reset ...passed 00:20:21.763 Test: aborted_reset_no_outstanding_io ...passed 00:20:22.022 Test: io_during_reset ...passed 00:20:22.022 Test: reset_completions ...passed 00:20:22.022 Test: io_during_qos_queue ...passed 00:20:22.022 Test: io_during_qos_reset ...passed 00:20:22.314 Test: enomem ...passed 00:20:22.314 Test: enomem_multi_bdev ...passed 00:20:22.314 Test: enomem_multi_bdev_unregister ...passed 00:20:22.314 Test: enomem_multi_io_target ...passed 00:20:22.314 Test: qos_dynamic_enable ...passed 00:20:22.585 Test: bdev_histograms_mt ...passed 00:20:22.585 Test: bdev_set_io_timeout_mt ...[2024-04-18 19:13:38.296997] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:20:22.585 passed 00:20:22.585 Test: lock_lba_range_then_submit_io ...[2024-04-18 19:13:38.323969] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x558bea8e0a60 already registered (old:0x6130000003c0 new:0x613000000c80) 00:20:22.585 passed 00:20:22.585 Test: unregister_during_reset ...passed 00:20:22.586 Test: event_notify_and_close ...passed 00:20:22.586 Suite: bdev_wrong_thread 00:20:22.586 Test: spdk_bdev_register_wt ...[2024-04-18 19:13:38.468973] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8411:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:20:22.586 passed 00:20:22.586 Test: spdk_bdev_examine_wt ...[2024-04-18 19:13:38.469763] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 790:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:20:22.586 passed 00:20:22.586 00:20:22.586 Run Summary: Type Total Ran Passed Failed Inactive 00:20:22.586 suites 2 2 n/a 0 0 00:20:22.586 tests 23 23 23 0 0 00:20:22.586 asserts 601 601 601 0 n/a 00:20:22.586 00:20:22.586 Elapsed time = 1.246 seconds 00:20:22.586 ************************************ 00:20:22.586 END TEST unittest_bdev 00:20:22.586 ************************************ 00:20:22.586 00:20:22.586 real 0m6.704s 00:20:22.586 user 0m2.854s 00:20:22.586 sys 0m3.794s 00:20:22.586 19:13:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:20:22.586 19:13:38 -- common/autotest_common.sh@10 -- # set +x 00:20:22.843 19:13:38 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:20:22.843 19:13:38 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:20:22.843 19:13:38 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:20:22.843 19:13:38 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:20:22.843 19:13:38 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:20:22.843 19:13:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:22.843 19:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.843 19:13:38 -- common/autotest_common.sh@10 -- # set +x 00:20:22.843 ************************************ 00:20:22.843 START TEST unittest_bdev_raid5f 00:20:22.843 ************************************ 00:20:22.843 19:13:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:20:22.843 00:20:22.843 00:20:22.843 CUnit - A unit testing framework for C - Version 2.1-3 00:20:22.843 http://cunit.sourceforge.net/ 00:20:22.843 00:20:22.843 00:20:22.843 Suite: raid5f 00:20:22.843 Test: test_raid5f_start ...passed 00:20:23.408 Test: test_raid5f_submit_read_request ...passed 00:20:23.666 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:20:27.050 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:20:45.276 Test: test_raid5f_chunk_write_error ...passed 00:20:51.836 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:20:55.116 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:21:21.663 Test: test_raid5f_submit_read_request_degraded ...passed 00:21:21.663 00:21:21.663 Run Summary: Type Total Ran Passed Failed Inactive 00:21:21.663 suites 1 1 n/a 0 0 00:21:21.663 tests 8 8 8 0 0 00:21:21.663 asserts 351864 351864 351864 0 n/a 00:21:21.663 00:21:21.663 Elapsed time = 58.525 seconds 00:21:21.663 ************************************ 00:21:21.663 END TEST unittest_bdev_raid5f 00:21:21.663 ************************************ 00:21:21.663 00:21:21.663 real 0m58.610s 00:21:21.663 user 0m55.697s 00:21:21.663 sys 0m2.905s 00:21:21.663 19:14:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:21.663 19:14:37 -- common/autotest_common.sh@10 -- # set +x 00:21:21.663 19:14:37 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:21:21.663 19:14:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:21.663 19:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:21.663 19:14:37 -- common/autotest_common.sh@10 -- # set +x 00:21:21.663 ************************************ 00:21:21.663 START TEST unittest_blob_blobfs 00:21:21.663 ************************************ 00:21:21.663 19:14:37 -- common/autotest_common.sh@1111 -- # unittest_blob 00:21:21.663 19:14:37 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:21:21.663 19:14:37 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:21:21.663 00:21:21.663 00:21:21.663 CUnit - A unit testing framework for C - Version 2.1-3 00:21:21.663 http://cunit.sourceforge.net/ 
00:21:21.663 00:21:21.663 00:21:21.663 Suite: blob_nocopy_noextent 00:21:21.663 Test: blob_init ...[2024-04-18 19:14:37.344406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:21:21.663 passed 00:21:21.663 Test: blob_thin_provision ...passed 00:21:21.663 Test: blob_read_only ...passed 00:21:21.663 Test: bs_load ...[2024-04-18 19:14:37.465494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:21:21.663 passed 00:21:21.663 Test: bs_load_custom_cluster_size ...passed 00:21:21.663 Test: bs_load_after_failed_grow ...passed 00:21:21.663 Test: bs_cluster_sz ...[2024-04-18 19:14:37.497977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:21:21.663 [2024-04-18 19:14:37.498484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:21:21.663 [2024-04-18 19:14:37.498799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:21:21.663 passed 00:21:21.663 Test: bs_resize_md ...passed 00:21:21.663 Test: bs_destroy ...passed 00:21:21.663 Test: bs_type ...passed 00:21:21.663 Test: bs_super_block ...passed 00:21:21.663 Test: bs_test_recover_cluster_count ...passed 00:21:21.663 Test: bs_grow_live ...passed 00:21:21.663 Test: bs_grow_live_no_space ...passed 00:21:21.921 Test: bs_test_grow ...passed 00:21:21.921 Test: blob_serialize_test ...passed 00:21:21.921 Test: super_block_crc ...passed 00:21:21.921 Test: blob_thin_prov_write_count_io ...passed 00:21:21.921 Test: blob_thin_prov_unmap_cluster ...passed 00:21:21.921 Test: bs_load_iter_test ...passed 00:21:21.921 Test: blob_relations ...[2024-04-18 19:14:37.709910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:21.921 [2024-04-18 19:14:37.710159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:21.921 [2024-04-18 19:14:37.711039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:21.921 [2024-04-18 19:14:37.711182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:21.921 passed 00:21:21.921 Test: blob_relations2 ...[2024-04-18 19:14:37.725155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:21.921 [2024-04-18 19:14:37.725379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:21.921 [2024-04-18 19:14:37.725469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:21.921 [2024-04-18 19:14:37.725577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:21.921 [2024-04-18 19:14:37.726866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:21.921 [2024-04-18 19:14:37.727016] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:21.921 [2024-04-18 19:14:37.727432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:21.921 [2024-04-18 19:14:37.727563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:21.921 passed 00:21:21.921 Test: blob_relations3 ...passed 00:21:22.179 Test: blobstore_clean_power_failure ...passed 00:21:22.179 Test: blob_delete_snapshot_power_failure ...[2024-04-18 19:14:37.878945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:21:22.179 [2024-04-18 19:14:37.890840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:22.179 [2024-04-18 19:14:37.891063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:22.179 [2024-04-18 19:14:37.891126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:22.179 [2024-04-18 19:14:37.902896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:21:22.179 [2024-04-18 19:14:37.903164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:21:22.179 [2024-04-18 19:14:37.903218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:22.179 [2024-04-18 19:14:37.903403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:22.179 [2024-04-18 19:14:37.915238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:21:22.180 [2024-04-18 19:14:37.915521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:22.180 [2024-04-18 19:14:37.927384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:21:22.180 [2024-04-18 19:14:37.927716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:22.180 [2024-04-18 19:14:37.939582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:21:22.180 [2024-04-18 19:14:37.939925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:22.180 passed 00:21:22.180 Test: blob_create_snapshot_power_failure ...[2024-04-18 19:14:37.975241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:22.180 [2024-04-18 19:14:37.998591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:21:22.180 [2024-04-18 19:14:38.010509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:21:22.180 passed 00:21:22.180 Test: blob_io_unit ...passed 00:21:22.180 Test: blob_io_unit_compatibility 
...passed 00:21:22.180 Test: blob_ext_md_pages ...passed 00:21:22.438 Test: blob_esnap_io_4096_4096 ...passed 00:21:22.438 Test: blob_esnap_io_512_512 ...passed 00:21:22.438 Test: blob_esnap_io_4096_512 ...passed 00:21:22.438 Test: blob_esnap_io_512_4096 ...passed 00:21:22.438 Suite: blob_bs_nocopy_noextent 00:21:22.438 Test: blob_open ...passed 00:21:22.438 Test: blob_create ...[2024-04-18 19:14:38.245330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:21:22.438 passed 00:21:22.438 Test: blob_create_loop ...passed 00:21:22.438 Test: blob_create_fail ...[2024-04-18 19:14:38.334788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:22.438 passed 00:21:22.696 Test: blob_create_internal ...passed 00:21:22.696 Test: blob_create_zero_extent ...passed 00:21:22.696 Test: blob_snapshot ...passed 00:21:22.696 Test: blob_clone ...passed 00:21:22.696 Test: blob_inflate ...[2024-04-18 19:14:38.509987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:21:22.696 passed 00:21:22.696 Test: blob_delete ...passed 00:21:22.696 Test: blob_resize_test ...[2024-04-18 19:14:38.574118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:21:22.696 passed 00:21:22.696 Test: channel_ops ...passed 00:21:22.955 Test: blob_super ...passed 00:21:22.955 Test: blob_rw_verify_iov ...passed 00:21:22.955 Test: blob_unmap ...passed 00:21:22.955 Test: blob_iter ...passed 00:21:22.955 Test: blob_parse_md ...passed 00:21:22.955 Test: bs_load_pending_removal ...passed 00:21:22.955 Test: bs_unload ...[2024-04-18 19:14:38.827796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:21:22.955 passed 00:21:22.955 Test: bs_usable_clusters ...passed 00:21:23.214 Test: blob_crc ...[2024-04-18 19:14:38.892046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:23.214 [2024-04-18 19:14:38.892347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:23.214 passed 00:21:23.214 Test: blob_flags ...passed 00:21:23.214 Test: bs_version ...passed 00:21:23.214 Test: blob_set_xattrs_test ...[2024-04-18 19:14:38.993018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:23.214 [2024-04-18 19:14:38.993299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:23.214 passed 00:21:23.471 Test: blob_thin_prov_alloc ...passed 00:21:23.471 Test: blob_insert_cluster_msg_test ...passed 00:21:23.471 Test: blob_thin_prov_rw ...passed 00:21:23.471 Test: blob_thin_prov_rle ...passed 00:21:23.471 Test: blob_thin_prov_rw_iov ...passed 00:21:23.471 Test: blob_snapshot_rw ...passed 00:21:23.471 Test: blob_snapshot_rw_iov ...passed 00:21:23.729 Test: blob_inflate_rw ...passed 00:21:23.729 Test: blob_snapshot_freeze_io ...passed 00:21:23.987 Test: blob_operation_split_rw ...passed 00:21:24.253 Test: blob_operation_split_rw_iov ...passed 
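The *ERROR* lines interleaved with the results above are expected: these blobstore suites deliberately exercise invalid-parameter and failure paths, so a case such as bs_cluster_sz passes precisely because spdk_bs_init rejects the bad options (a 0-valued option, a 4095-byte cluster against a 4096-byte page). A minimal C sketch of that kind of check follows; the constants come from the log messages above, not from the SPDK source, so treat it as illustration only.

    /* Illustrative only: a check in the spirit of the bs_opts_verify/bs_alloc
     * rejections logged above, using the 4096-byte page size they mention. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BS_PAGE_SIZE 4096u

    static bool cluster_sz_ok(uint32_t cluster_sz)
    {
        if (cluster_sz == 0) {
            return false;               /* "Blobstore options cannot be set to 0" */
        }
        return cluster_sz >= BS_PAGE_SIZE; /* rejects 4095, accepts 4096 and up */
    }

    int main(void)
    {
        printf("cluster_sz=4095 -> %s\n", cluster_sz_ok(4095) ? "ok" : "rejected");
        printf("cluster_sz=4096 -> %s\n", cluster_sz_ok(4096) ? "ok" : "rejected");
        return 0;
    }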
00:21:24.254 Test: blob_simultaneous_operations ...[2024-04-18 19:14:40.021731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:24.254 [2024-04-18 19:14:40.022049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:24.254 [2024-04-18 19:14:40.023740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:24.254 [2024-04-18 19:14:40.023963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:24.254 [2024-04-18 19:14:40.040433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:24.254 [2024-04-18 19:14:40.040773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:24.254 [2024-04-18 19:14:40.041067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:24.254 [2024-04-18 19:14:40.041240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:24.254 passed 00:21:24.254 Test: blob_persist_test ...passed 00:21:24.254 Test: blob_decouple_snapshot ...passed 00:21:24.513 Test: blob_seek_io_unit ...passed 00:21:24.513 Test: blob_nested_freezes ...passed 00:21:24.513 Suite: blob_blob_nocopy_noextent 00:21:24.513 Test: blob_write ...passed 00:21:24.513 Test: blob_read ...passed 00:21:24.513 Test: blob_rw_verify ...passed 00:21:24.513 Test: blob_rw_verify_iov_nomem ...passed 00:21:24.513 Test: blob_rw_iov_read_only ...passed 00:21:24.513 Test: blob_xattr ...passed 00:21:24.770 Test: blob_dirty_shutdown ...passed 00:21:24.770 Test: blob_is_degraded ...passed 00:21:24.770 Suite: blob_esnap_bs_nocopy_noextent 00:21:24.770 Test: blob_esnap_create ...passed 00:21:24.771 Test: blob_esnap_thread_add_remove ...passed 00:21:24.771 Test: blob_esnap_clone_snapshot ...passed 00:21:24.771 Test: blob_esnap_clone_inflate ...passed 00:21:24.771 Test: blob_esnap_clone_decouple ...passed 00:21:25.028 Test: blob_esnap_clone_reload ...passed 00:21:25.028 Test: blob_esnap_hotplug ...passed 00:21:25.028 Suite: blob_nocopy_extent 00:21:25.028 Test: blob_init ...[2024-04-18 19:14:40.742854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:21:25.028 passed 00:21:25.028 Test: blob_thin_provision ...passed 00:21:25.028 Test: blob_read_only ...passed 00:21:25.028 Test: bs_load ...[2024-04-18 19:14:40.792786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:21:25.028 passed 00:21:25.028 Test: bs_load_custom_cluster_size ...passed 00:21:25.028 Test: bs_load_after_failed_grow ...passed 00:21:25.028 Test: bs_cluster_sz ...[2024-04-18 19:14:40.821172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:21:25.028 [2024-04-18 19:14:40.821625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:21:25.028 [2024-04-18 19:14:40.821839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:21:25.028 passed 00:21:25.028 Test: bs_resize_md ...passed 00:21:25.028 Test: bs_destroy ...passed 00:21:25.028 Test: bs_type ...passed 00:21:25.028 Test: bs_super_block ...passed 00:21:25.028 Test: bs_test_recover_cluster_count ...passed 00:21:25.028 Test: bs_grow_live ...passed 00:21:25.028 Test: bs_grow_live_no_space ...passed 00:21:25.028 Test: bs_test_grow ...passed 00:21:25.028 Test: blob_serialize_test ...passed 00:21:25.028 Test: super_block_crc ...passed 00:21:25.285 Test: blob_thin_prov_write_count_io ...passed 00:21:25.285 Test: blob_thin_prov_unmap_cluster ...passed 00:21:25.285 Test: bs_load_iter_test ...passed 00:21:25.285 Test: blob_relations ...[2024-04-18 19:14:41.014103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:25.285 [2024-04-18 19:14:41.014444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.285 [2024-04-18 19:14:41.015714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:25.285 [2024-04-18 19:14:41.015920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.285 passed 00:21:25.285 Test: blob_relations2 ...[2024-04-18 19:14:41.032253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:25.285 [2024-04-18 19:14:41.032543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.285 [2024-04-18 19:14:41.032722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:25.285 [2024-04-18 19:14:41.032894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.285 [2024-04-18 19:14:41.034549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:25.285 [2024-04-18 19:14:41.034761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.285 [2024-04-18 19:14:41.035357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:25.285 [2024-04-18 19:14:41.035546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.285 passed 00:21:25.285 Test: blob_relations3 ...passed 00:21:25.285 Test: blobstore_clean_power_failure ...passed 00:21:25.286 Test: blob_delete_snapshot_power_failure ...[2024-04-18 19:14:41.200664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:21:25.286 [2024-04-18 19:14:41.214002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:21:25.543 [2024-04-18 19:14:41.227252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:25.543 [2024-04-18 19:14:41.227565] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:25.543 [2024-04-18 19:14:41.227740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.543 [2024-04-18 19:14:41.240883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:21:25.543 [2024-04-18 19:14:41.241166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:21:25.543 [2024-04-18 19:14:41.241304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:25.543 [2024-04-18 19:14:41.241425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.543 [2024-04-18 19:14:41.254282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:21:25.543 [2024-04-18 19:14:41.254585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:21:25.543 [2024-04-18 19:14:41.254735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:25.543 [2024-04-18 19:14:41.254887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.543 [2024-04-18 19:14:41.268245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:21:25.543 [2024-04-18 19:14:41.268551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.543 [2024-04-18 19:14:41.281590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:21:25.543 [2024-04-18 19:14:41.281952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.543 [2024-04-18 19:14:41.295260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:21:25.543 [2024-04-18 19:14:41.295580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:25.543 passed 00:21:25.543 Test: blob_create_snapshot_power_failure ...[2024-04-18 19:14:41.334798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:25.543 [2024-04-18 19:14:41.347554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:21:25.543 [2024-04-18 19:14:41.373074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:21:25.543 [2024-04-18 19:14:41.386174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:21:25.543 passed 00:21:25.543 Test: blob_io_unit ...passed 00:21:25.543 Test: blob_io_unit_compatibility ...passed 00:21:25.544 Test: blob_ext_md_pages ...passed 00:21:25.831 Test: blob_esnap_io_4096_4096 ...passed 00:21:25.831 Test: blob_esnap_io_512_512 ...passed 00:21:25.831 Test: blob_esnap_io_4096_512 ...passed 00:21:25.831 Test: 
blob_esnap_io_512_4096 ...passed 00:21:25.831 Suite: blob_bs_nocopy_extent 00:21:25.831 Test: blob_open ...passed 00:21:25.831 Test: blob_create ...[2024-04-18 19:14:41.644287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:21:25.831 passed 00:21:25.831 Test: blob_create_loop ...passed 00:21:25.831 Test: blob_create_fail ...[2024-04-18 19:14:41.747549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:25.831 passed 00:21:26.089 Test: blob_create_internal ...passed 00:21:26.089 Test: blob_create_zero_extent ...passed 00:21:26.089 Test: blob_snapshot ...passed 00:21:26.089 Test: blob_clone ...passed 00:21:26.089 Test: blob_inflate ...[2024-04-18 19:14:41.934820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:21:26.089 passed 00:21:26.089 Test: blob_delete ...passed 00:21:26.089 Test: blob_resize_test ...[2024-04-18 19:14:42.000194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:21:26.089 passed 00:21:26.348 Test: channel_ops ...passed 00:21:26.348 Test: blob_super ...passed 00:21:26.348 Test: blob_rw_verify_iov ...passed 00:21:26.348 Test: blob_unmap ...passed 00:21:26.348 Test: blob_iter ...passed 00:21:26.348 Test: blob_parse_md ...passed 00:21:26.348 Test: bs_load_pending_removal ...passed 00:21:26.348 Test: bs_unload ...[2024-04-18 19:14:42.260503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:21:26.348 passed 00:21:26.608 Test: bs_usable_clusters ...passed 00:21:26.608 Test: blob_crc ...[2024-04-18 19:14:42.326987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:26.608 [2024-04-18 19:14:42.327319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:26.608 passed 00:21:26.608 Test: blob_flags ...passed 00:21:26.608 Test: bs_version ...passed 00:21:26.608 Test: blob_set_xattrs_test ...[2024-04-18 19:14:42.429469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:26.608 [2024-04-18 19:14:42.429776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:26.608 passed 00:21:26.866 Test: blob_thin_prov_alloc ...passed 00:21:26.866 Test: blob_insert_cluster_msg_test ...passed 00:21:26.866 Test: blob_thin_prov_rw ...passed 00:21:26.866 Test: blob_thin_prov_rle ...passed 00:21:26.866 Test: blob_thin_prov_rw_iov ...passed 00:21:26.866 Test: blob_snapshot_rw ...passed 00:21:26.866 Test: blob_snapshot_rw_iov ...passed 00:21:27.125 Test: blob_inflate_rw ...passed 00:21:27.125 Test: blob_snapshot_freeze_io ...passed 00:21:27.383 Test: blob_operation_split_rw ...passed 00:21:27.643 Test: blob_operation_split_rw_iov ...passed 00:21:27.643 Test: blob_simultaneous_operations ...[2024-04-18 19:14:43.412622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:27.643 [2024-04-18 
19:14:43.412721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:27.643 [2024-04-18 19:14:43.413937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:27.643 [2024-04-18 19:14:43.413986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:27.643 [2024-04-18 19:14:43.426640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:27.643 [2024-04-18 19:14:43.426730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:27.643 [2024-04-18 19:14:43.426845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:27.643 [2024-04-18 19:14:43.426865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:27.643 passed 00:21:27.643 Test: blob_persist_test ...passed 00:21:27.643 Test: blob_decouple_snapshot ...passed 00:21:27.901 Test: blob_seek_io_unit ...passed 00:21:27.901 Test: blob_nested_freezes ...passed 00:21:27.901 Suite: blob_blob_nocopy_extent 00:21:27.901 Test: blob_write ...passed 00:21:27.901 Test: blob_read ...passed 00:21:27.901 Test: blob_rw_verify ...passed 00:21:27.901 Test: blob_rw_verify_iov_nomem ...passed 00:21:27.901 Test: blob_rw_iov_read_only ...passed 00:21:27.901 Test: blob_xattr ...passed 00:21:28.158 Test: blob_dirty_shutdown ...passed 00:21:28.158 Test: blob_is_degraded ...passed 00:21:28.158 Suite: blob_esnap_bs_nocopy_extent 00:21:28.158 Test: blob_esnap_create ...passed 00:21:28.158 Test: blob_esnap_thread_add_remove ...passed 00:21:28.158 Test: blob_esnap_clone_snapshot ...passed 00:21:28.159 Test: blob_esnap_clone_inflate ...passed 00:21:28.159 Test: blob_esnap_clone_decouple ...passed 00:21:28.159 Test: blob_esnap_clone_reload ...passed 00:21:28.159 Test: blob_esnap_hotplug ...passed 00:21:28.159 Suite: blob_copy_noextent 00:21:28.159 Test: blob_init ...[2024-04-18 19:14:44.084606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:21:28.416 passed 00:21:28.416 Test: blob_thin_provision ...passed 00:21:28.416 Test: blob_read_only ...passed 00:21:28.416 Test: bs_load ...[2024-04-18 19:14:44.127615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:21:28.416 passed 00:21:28.416 Test: bs_load_custom_cluster_size ...passed 00:21:28.416 Test: bs_load_after_failed_grow ...passed 00:21:28.416 Test: bs_cluster_sz ...[2024-04-18 19:14:44.150671] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:21:28.416 [2024-04-18 19:14:44.150873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
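The *_power_failure tests above log long runs of "Metadata page N read failed ... -5" and "Failed to remove blob"; -5 is -EIO, which suggests a backing device made to fail partway through each operation so the test can verify the blobstore stays consistent at every intermediate point. A plausible shape for that kind of error injection, as a hedged sketch rather than the actual SPDK test harness, is:

    /* Hedged sketch of fail-after-N error injection: the device succeeds for
     * 'threshold' I/Os and then returns -EIO, and the threshold is swept so
     * each run stops at a different point (-EIO shows up as "-5" above). */
    #include <errno.h>
    #include <stdio.h>

    struct failing_dev {
        unsigned int ios_remaining;   /* successful I/Os before injection kicks in */
    };

    static int dev_read(struct failing_dev *dev)
    {
        if (dev->ios_remaining == 0) {
            return -EIO;              /* reported as "-5" in the messages above */
        }
        dev->ios_remaining--;
        return 0;
    }

    int main(void)
    {
        for (unsigned int threshold = 0; threshold < 3; threshold++) {
            struct failing_dev dev = { .ios_remaining = threshold };
            int rc = 0;
            for (int io = 0; io < 4 && rc == 0; io++) {
                rc = dev_read(&dev);
            }
            printf("threshold=%u -> last rc=%d\n", threshold, rc);
        }
        return 0;
    }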
00:21:28.416 [2024-04-18 19:14:44.150914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:21:28.416 passed 00:21:28.416 Test: bs_resize_md ...passed 00:21:28.416 Test: bs_destroy ...passed 00:21:28.416 Test: bs_type ...passed 00:21:28.416 Test: bs_super_block ...passed 00:21:28.416 Test: bs_test_recover_cluster_count ...passed 00:21:28.416 Test: bs_grow_live ...passed 00:21:28.416 Test: bs_grow_live_no_space ...passed 00:21:28.416 Test: bs_test_grow ...passed 00:21:28.416 Test: blob_serialize_test ...passed 00:21:28.416 Test: super_block_crc ...passed 00:21:28.416 Test: blob_thin_prov_write_count_io ...passed 00:21:28.416 Test: blob_thin_prov_unmap_cluster ...passed 00:21:28.416 Test: bs_load_iter_test ...passed 00:21:28.416 Test: blob_relations ...[2024-04-18 19:14:44.324046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:28.416 [2024-04-18 19:14:44.324148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.416 [2024-04-18 19:14:44.324646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:28.416 [2024-04-18 19:14:44.324688] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.416 passed 00:21:28.416 Test: blob_relations2 ...[2024-04-18 19:14:44.337348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:28.416 [2024-04-18 19:14:44.337428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.416 [2024-04-18 19:14:44.337451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:28.417 [2024-04-18 19:14:44.337464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.417 [2024-04-18 19:14:44.338271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:28.417 [2024-04-18 19:14:44.338323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.417 [2024-04-18 19:14:44.338565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:28.417 [2024-04-18 19:14:44.338597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.417 passed 00:21:28.674 Test: blob_relations3 ...passed 00:21:28.674 Test: blobstore_clean_power_failure ...passed 00:21:28.674 Test: blob_delete_snapshot_power_failure ...[2024-04-18 19:14:44.485234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:21:28.674 [2024-04-18 19:14:44.499604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:28.674 [2024-04-18 19:14:44.499713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:28.674 [2024-04-18 19:14:44.499744] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.674 [2024-04-18 19:14:44.511120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:21:28.674 [2024-04-18 19:14:44.511202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:21:28.674 [2024-04-18 19:14:44.511221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:28.674 [2024-04-18 19:14:44.511254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.674 [2024-04-18 19:14:44.523124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:21:28.674 [2024-04-18 19:14:44.523253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.674 [2024-04-18 19:14:44.534721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:21:28.674 [2024-04-18 19:14:44.534821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.674 [2024-04-18 19:14:44.546353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:21:28.674 [2024-04-18 19:14:44.546442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:28.674 passed 00:21:28.674 Test: blob_create_snapshot_power_failure ...[2024-04-18 19:14:44.580774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:28.674 [2024-04-18 19:14:44.602974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:21:28.932 [2024-04-18 19:14:44.614361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:21:28.932 passed 00:21:28.932 Test: blob_io_unit ...passed 00:21:28.932 Test: blob_io_unit_compatibility ...passed 00:21:28.932 Test: blob_ext_md_pages ...passed 00:21:28.932 Test: blob_esnap_io_4096_4096 ...passed 00:21:28.932 Test: blob_esnap_io_512_512 ...passed 00:21:28.932 Test: blob_esnap_io_4096_512 ...passed 00:21:28.932 Test: blob_esnap_io_512_4096 ...passed 00:21:28.932 Suite: blob_bs_copy_noextent 00:21:28.932 Test: blob_open ...passed 00:21:28.933 Test: blob_create ...[2024-04-18 19:14:44.844276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:21:28.933 passed 00:21:29.190 Test: blob_create_loop ...passed 00:21:29.190 Test: blob_create_fail ...[2024-04-18 19:14:44.933427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:29.190 passed 00:21:29.190 Test: blob_create_internal ...passed 00:21:29.190 Test: blob_create_zero_extent ...passed 00:21:29.190 Test: blob_snapshot ...passed 00:21:29.191 Test: blob_clone ...passed 00:21:29.191 Test: blob_inflate ...[2024-04-18 19:14:45.100094] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:21:29.191 passed 00:21:29.448 Test: blob_delete ...passed 00:21:29.448 Test: blob_resize_test ...[2024-04-18 19:14:45.163435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:21:29.448 passed 00:21:29.448 Test: channel_ops ...passed 00:21:29.448 Test: blob_super ...passed 00:21:29.448 Test: blob_rw_verify_iov ...passed 00:21:29.448 Test: blob_unmap ...passed 00:21:29.448 Test: blob_iter ...passed 00:21:29.448 Test: blob_parse_md ...passed 00:21:29.706 Test: bs_load_pending_removal ...passed 00:21:29.706 Test: bs_unload ...[2024-04-18 19:14:45.418378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:21:29.706 passed 00:21:29.706 Test: bs_usable_clusters ...passed 00:21:29.706 Test: blob_crc ...[2024-04-18 19:14:45.482206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:29.706 [2024-04-18 19:14:45.482320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:29.706 passed 00:21:29.706 Test: blob_flags ...passed 00:21:29.706 Test: bs_version ...passed 00:21:29.706 Test: blob_set_xattrs_test ...[2024-04-18 19:14:45.579302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:29.706 [2024-04-18 19:14:45.579446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:29.706 passed 00:21:29.964 Test: blob_thin_prov_alloc ...passed 00:21:29.964 Test: blob_insert_cluster_msg_test ...passed 00:21:29.964 Test: blob_thin_prov_rw ...passed 00:21:29.964 Test: blob_thin_prov_rle ...passed 00:21:29.964 Test: blob_thin_prov_rw_iov ...passed 00:21:30.222 Test: blob_snapshot_rw ...passed 00:21:30.222 Test: blob_snapshot_rw_iov ...passed 00:21:30.520 Test: blob_inflate_rw ...passed 00:21:30.520 Test: blob_snapshot_freeze_io ...passed 00:21:30.520 Test: blob_operation_split_rw ...passed 00:21:30.793 Test: blob_operation_split_rw_iov ...passed 00:21:30.793 Test: blob_simultaneous_operations ...[2024-04-18 19:14:46.509388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:30.794 [2024-04-18 19:14:46.509485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:30.794 [2024-04-18 19:14:46.509911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:30.794 [2024-04-18 19:14:46.509963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:30.794 [2024-04-18 19:14:46.512482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:30.794 [2024-04-18 19:14:46.512534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:30.794 [2024-04-18 19:14:46.512622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:21:30.794 [2024-04-18 19:14:46.512638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:30.794 passed 00:21:30.794 Test: blob_persist_test ...passed 00:21:30.794 Test: blob_decouple_snapshot ...passed 00:21:30.794 Test: blob_seek_io_unit ...passed 00:21:30.794 Test: blob_nested_freezes ...passed 00:21:30.794 Suite: blob_blob_copy_noextent 00:21:30.794 Test: blob_write ...passed 00:21:31.052 Test: blob_read ...passed 00:21:31.052 Test: blob_rw_verify ...passed 00:21:31.052 Test: blob_rw_verify_iov_nomem ...passed 00:21:31.052 Test: blob_rw_iov_read_only ...passed 00:21:31.052 Test: blob_xattr ...passed 00:21:31.052 Test: blob_dirty_shutdown ...passed 00:21:31.052 Test: blob_is_degraded ...passed 00:21:31.052 Suite: blob_esnap_bs_copy_noextent 00:21:31.052 Test: blob_esnap_create ...passed 00:21:31.309 Test: blob_esnap_thread_add_remove ...passed 00:21:31.309 Test: blob_esnap_clone_snapshot ...passed 00:21:31.309 Test: blob_esnap_clone_inflate ...passed 00:21:31.309 Test: blob_esnap_clone_decouple ...passed 00:21:31.309 Test: blob_esnap_clone_reload ...passed 00:21:31.309 Test: blob_esnap_hotplug ...passed 00:21:31.309 Suite: blob_copy_extent 00:21:31.309 Test: blob_init ...[2024-04-18 19:14:47.157536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:21:31.309 passed 00:21:31.309 Test: blob_thin_provision ...passed 00:21:31.309 Test: blob_read_only ...passed 00:21:31.309 Test: bs_load ...[2024-04-18 19:14:47.203722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:21:31.309 passed 00:21:31.309 Test: bs_load_custom_cluster_size ...passed 00:21:31.309 Test: bs_load_after_failed_grow ...passed 00:21:31.309 Test: bs_cluster_sz ...[2024-04-18 19:14:47.226925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:21:31.309 [2024-04-18 19:14:47.227115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:21:31.309 [2024-04-18 19:14:47.227150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:21:31.309 passed 00:21:31.566 Test: bs_resize_md ...passed 00:21:31.566 Test: bs_destroy ...passed 00:21:31.566 Test: bs_type ...passed 00:21:31.566 Test: bs_super_block ...passed 00:21:31.566 Test: bs_test_recover_cluster_count ...passed 00:21:31.566 Test: bs_grow_live ...passed 00:21:31.566 Test: bs_grow_live_no_space ...passed 00:21:31.566 Test: bs_test_grow ...passed 00:21:31.566 Test: blob_serialize_test ...passed 00:21:31.566 Test: super_block_crc ...passed 00:21:31.566 Test: blob_thin_prov_write_count_io ...passed 00:21:31.566 Test: blob_thin_prov_unmap_cluster ...passed 00:21:31.566 Test: bs_load_iter_test ...passed 00:21:31.566 Test: blob_relations ...[2024-04-18 19:14:47.389692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:31.566 [2024-04-18 19:14:47.389781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.566 [2024-04-18 19:14:47.390331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:31.566 [2024-04-18 19:14:47.390383] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.566 passed 00:21:31.566 Test: blob_relations2 ...[2024-04-18 19:14:47.402847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:31.566 [2024-04-18 19:14:47.402938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.566 [2024-04-18 19:14:47.402964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:31.566 [2024-04-18 19:14:47.402978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.566 [2024-04-18 19:14:47.403848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:31.566 [2024-04-18 19:14:47.403890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.566 [2024-04-18 19:14:47.404144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:21:31.566 [2024-04-18 19:14:47.404179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.566 passed 00:21:31.566 Test: blob_relations3 ...passed 00:21:31.824 Test: blobstore_clean_power_failure ...passed 00:21:31.824 Test: blob_delete_snapshot_power_failure ...[2024-04-18 19:14:47.552910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:21:31.824 [2024-04-18 19:14:47.564414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:21:31.824 [2024-04-18 19:14:47.576095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:31.824 [2024-04-18 19:14:47.576176] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:31.824 [2024-04-18 19:14:47.576199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.824 [2024-04-18 19:14:47.587594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:21:31.824 [2024-04-18 19:14:47.587673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:21:31.824 [2024-04-18 19:14:47.587733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:31.824 [2024-04-18 19:14:47.587760] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.824 [2024-04-18 19:14:47.599106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:21:31.824 [2024-04-18 19:14:47.599202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:21:31.824 [2024-04-18 19:14:47.599244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:21:31.824 [2024-04-18 19:14:47.599264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.824 [2024-04-18 19:14:47.610695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:21:31.824 [2024-04-18 19:14:47.610799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.824 [2024-04-18 19:14:47.622295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:21:31.824 [2024-04-18 19:14:47.622402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.824 [2024-04-18 19:14:47.633923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:21:31.824 [2024-04-18 19:14:47.634016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:31.824 passed 00:21:31.824 Test: blob_create_snapshot_power_failure ...[2024-04-18 19:14:47.668367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:21:31.824 [2024-04-18 19:14:47.679647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:21:31.824 [2024-04-18 19:14:47.702105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:21:31.824 [2024-04-18 19:14:47.713407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:21:31.824 passed 00:21:32.081 Test: blob_io_unit ...passed 00:21:32.081 Test: blob_io_unit_compatibility ...passed 00:21:32.081 Test: blob_ext_md_pages ...passed 00:21:32.081 Test: blob_esnap_io_4096_4096 ...passed 00:21:32.081 Test: blob_esnap_io_512_512 ...passed 00:21:32.081 Test: blob_esnap_io_4096_512 ...passed 00:21:32.081 Test: 
blob_esnap_io_512_4096 ...passed 00:21:32.081 Suite: blob_bs_copy_extent 00:21:32.081 Test: blob_open ...passed 00:21:32.081 Test: blob_create ...[2024-04-18 19:14:47.941073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:21:32.081 passed 00:21:32.081 Test: blob_create_loop ...passed 00:21:32.338 Test: blob_create_fail ...[2024-04-18 19:14:48.031346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:32.338 passed 00:21:32.338 Test: blob_create_internal ...passed 00:21:32.338 Test: blob_create_zero_extent ...passed 00:21:32.338 Test: blob_snapshot ...passed 00:21:32.338 Test: blob_clone ...passed 00:21:32.338 Test: blob_inflate ...[2024-04-18 19:14:48.192070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:21:32.338 passed 00:21:32.338 Test: blob_delete ...passed 00:21:32.338 Test: blob_resize_test ...[2024-04-18 19:14:48.253259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:21:32.338 passed 00:21:32.596 Test: channel_ops ...passed 00:21:32.596 Test: blob_super ...passed 00:21:32.596 Test: blob_rw_verify_iov ...passed 00:21:32.596 Test: blob_unmap ...passed 00:21:32.596 Test: blob_iter ...passed 00:21:32.596 Test: blob_parse_md ...passed 00:21:32.596 Test: bs_load_pending_removal ...passed 00:21:32.596 Test: bs_unload ...[2024-04-18 19:14:48.502674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:21:32.596 passed 00:21:32.853 Test: bs_usable_clusters ...passed 00:21:32.853 Test: blob_crc ...[2024-04-18 19:14:48.566188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:32.853 [2024-04-18 19:14:48.566278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:21:32.853 passed 00:21:32.853 Test: blob_flags ...passed 00:21:32.853 Test: bs_version ...passed 00:21:32.853 Test: blob_set_xattrs_test ...[2024-04-18 19:14:48.665673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:32.853 [2024-04-18 19:14:48.665780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:21:32.853 passed 00:21:33.111 Test: blob_thin_prov_alloc ...passed 00:21:33.111 Test: blob_insert_cluster_msg_test ...passed 00:21:33.111 Test: blob_thin_prov_rw ...passed 00:21:33.111 Test: blob_thin_prov_rle ...passed 00:21:33.111 Test: blob_thin_prov_rw_iov ...passed 00:21:33.111 Test: blob_snapshot_rw ...passed 00:21:33.111 Test: blob_snapshot_rw_iov ...passed 00:21:33.368 Test: blob_inflate_rw ...passed 00:21:33.368 Test: blob_snapshot_freeze_io ...passed 00:21:33.627 Test: blob_operation_split_rw ...passed 00:21:33.627 Test: blob_operation_split_rw_iov ...passed 00:21:33.627 Test: blob_simultaneous_operations ...[2024-04-18 19:14:49.543111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:33.627 [2024-04-18 
19:14:49.543232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:33.627 [2024-04-18 19:14:49.543652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:33.627 [2024-04-18 19:14:49.543707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:33.627 [2024-04-18 19:14:49.546036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:33.627 [2024-04-18 19:14:49.546088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:33.627 [2024-04-18 19:14:49.546172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:21:33.627 [2024-04-18 19:14:49.546189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:21:33.885 passed 00:21:33.885 Test: blob_persist_test ...passed 00:21:33.885 Test: blob_decouple_snapshot ...passed 00:21:33.885 Test: blob_seek_io_unit ...passed 00:21:33.885 Test: blob_nested_freezes ...passed 00:21:33.885 Suite: blob_blob_copy_extent 00:21:33.885 Test: blob_write ...passed 00:21:33.885 Test: blob_read ...passed 00:21:33.885 Test: blob_rw_verify ...passed 00:21:34.143 Test: blob_rw_verify_iov_nomem ...passed 00:21:34.143 Test: blob_rw_iov_read_only ...passed 00:21:34.143 Test: blob_xattr ...passed 00:21:34.143 Test: blob_dirty_shutdown ...passed 00:21:34.143 Test: blob_is_degraded ...passed 00:21:34.143 Suite: blob_esnap_bs_copy_extent 00:21:34.143 Test: blob_esnap_create ...passed 00:21:34.143 Test: blob_esnap_thread_add_remove ...passed 00:21:34.143 Test: blob_esnap_clone_snapshot ...passed 00:21:34.401 Test: blob_esnap_clone_inflate ...passed 00:21:34.401 Test: blob_esnap_clone_decouple ...passed 00:21:34.401 Test: blob_esnap_clone_reload ...passed 00:21:34.401 Test: blob_esnap_hotplug ...passed 00:21:34.401 00:21:34.401 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.401 suites 16 16 n/a 0 0 00:21:34.401 tests 352 352 352 0 0 00:21:34.401 asserts 93211 93211 93211 0 n/a 00:21:34.401 00:21:34.401 Elapsed time = 12.771 seconds 00:21:34.401 19:14:50 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:21:34.401 00:21:34.401 00:21:34.401 CUnit - A unit testing framework for C - Version 2.1-3 00:21:34.401 http://cunit.sourceforge.net/ 00:21:34.401 00:21:34.401 00:21:34.401 Suite: blob_bdev 00:21:34.401 Test: create_bs_dev ...passed 00:21:34.401 Test: create_bs_dev_ro ...[2024-04-18 19:14:50.284549] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:21:34.401 passed 00:21:34.401 Test: create_bs_dev_rw ...passed 00:21:34.401 Test: claim_bs_dev ...[2024-04-18 19:14:50.285635] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:21:34.401 passed 00:21:34.401 Test: claim_bs_dev_ro ...passed 00:21:34.401 Test: deferred_destroy_refs ...passed 00:21:34.401 Test: deferred_destroy_channels ...passed 00:21:34.401 Test: deferred_destroy_threads ...passed 00:21:34.401 00:21:34.401 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.401 suites 1 1 n/a 0 0 00:21:34.401 tests 8 8 8 0 0 00:21:34.401 
asserts 119 119 119 0 n/a 00:21:34.401 00:21:34.401 Elapsed time = 0.001 seconds 00:21:34.401 19:14:50 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:21:34.401 00:21:34.401 00:21:34.401 CUnit - A unit testing framework for C - Version 2.1-3 00:21:34.401 http://cunit.sourceforge.net/ 00:21:34.401 00:21:34.401 00:21:34.401 Suite: tree 00:21:34.401 Test: blobfs_tree_op_test ...passed 00:21:34.401 00:21:34.401 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.401 suites 1 1 n/a 0 0 00:21:34.401 tests 1 1 1 0 0 00:21:34.401 asserts 27 27 27 0 n/a 00:21:34.401 00:21:34.401 Elapsed time = 0.000 seconds 00:21:34.659 19:14:50 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:21:34.659 00:21:34.659 00:21:34.659 CUnit - A unit testing framework for C - Version 2.1-3 00:21:34.659 http://cunit.sourceforge.net/ 00:21:34.659 00:21:34.659 00:21:34.659 Suite: blobfs_async_ut 00:21:34.659 Test: fs_init ...passed 00:21:34.659 Test: fs_open ...passed 00:21:34.659 Test: fs_create ...passed 00:21:34.659 Test: fs_truncate ...passed 00:21:34.659 Test: fs_rename ...[2024-04-18 19:14:50.485362] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:21:34.659 passed 00:21:34.659 Test: fs_rw_async ...passed 00:21:34.659 Test: fs_writev_readv_async ...passed 00:21:34.659 Test: tree_find_buffer_ut ...passed 00:21:34.659 Test: channel_ops ...passed 00:21:34.659 Test: channel_ops_sync ...passed 00:21:34.659 00:21:34.659 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.659 suites 1 1 n/a 0 0 00:21:34.659 tests 10 10 10 0 0 00:21:34.659 asserts 292 292 292 0 n/a 00:21:34.659 00:21:34.659 Elapsed time = 0.162 seconds 00:21:34.659 19:14:50 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:21:34.918 00:21:34.918 00:21:34.918 CUnit - A unit testing framework for C - Version 2.1-3 00:21:34.918 http://cunit.sourceforge.net/ 00:21:34.918 00:21:34.918 00:21:34.918 Suite: blobfs_sync_ut 00:21:34.918 Test: cache_read_after_write ...[2024-04-18 19:14:50.680861] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:21:34.918 passed 00:21:34.918 Test: file_length ...passed 00:21:34.918 Test: append_write_to_extend_blob ...passed 00:21:34.918 Test: partial_buffer ...passed 00:21:34.918 Test: cache_write_null_buffer ...passed 00:21:34.918 Test: fs_create_sync ...passed 00:21:34.918 Test: fs_rename_sync ...passed 00:21:34.918 Test: cache_append_no_cache ...passed 00:21:34.918 Test: fs_delete_file_without_close ...passed 00:21:34.918 00:21:34.918 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.918 suites 1 1 n/a 0 0 00:21:34.918 tests 9 9 9 0 0 00:21:34.918 asserts 345 345 345 0 n/a 00:21:34.918 00:21:34.918 Elapsed time = 0.398 seconds 00:21:35.177 19:14:50 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:21:35.177 00:21:35.177 00:21:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.177 http://cunit.sourceforge.net/ 00:21:35.177 00:21:35.177 00:21:35.177 Suite: blobfs_bdev_ut 00:21:35.177 Test: spdk_blobfs_bdev_detect_test ...[2024-04-18 19:14:50.873598] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:21:35.177 passed 00:21:35.177 Test: spdk_blobfs_bdev_create_test ...[2024-04-18 19:14:50.874042] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:21:35.177 passed 00:21:35.177 Test: spdk_blobfs_bdev_mount_test ...passed 00:21:35.177 00:21:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.177 suites 1 1 n/a 0 0 00:21:35.177 tests 3 3 3 0 0 00:21:35.177 asserts 9 9 9 0 n/a 00:21:35.177 00:21:35.177 Elapsed time = 0.001 seconds 00:21:35.177 00:21:35.177 real 0m13.586s 00:21:35.177 user 0m12.937s 00:21:35.177 sys 0m0.801s 00:21:35.177 19:14:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:35.177 19:14:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.177 ************************************ 00:21:35.177 END TEST unittest_blob_blobfs 00:21:35.177 ************************************ 00:21:35.177 19:14:50 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:21:35.177 19:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:35.177 19:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.177 19:14:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.177 ************************************ 00:21:35.177 START TEST unittest_event 00:21:35.177 ************************************ 00:21:35.177 19:14:50 -- common/autotest_common.sh@1111 -- # unittest_event 00:21:35.177 19:14:50 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:21:35.177 00:21:35.177 00:21:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.177 http://cunit.sourceforge.net/ 00:21:35.177 00:21:35.177 00:21:35.177 Suite: app_suite 00:21:35.177 Test: test_spdk_app_parse_args ...app_ut [options] 00:21:35.177 00:21:35.177 CPU options:app_ut: invalid option -- 'z' 00:21:35.177 00:21:35.177 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:21:35.177 (like [0,1,10]) 00:21:35.177 --lcores lcore to CPU mapping list. The list is in the format: 00:21:35.177 [<,lcores[@CPUs]>...] 00:21:35.177 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:21:35.177 Within the group, '-' is used for range separator, 00:21:35.177 ',' is used for single number separator. 00:21:35.177 '( )' can be omitted for single element group, 00:21:35.177 '@' can be omitted if cpus and lcores have the same value 00:21:35.177 --disable-cpumask-locks Disable CPU core lock files. 00:21:35.177 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:21:35.177 pollers in the app support interrupt mode) 00:21:35.177 -p, --main-core main (primary) core for DPDK 00:21:35.177 00:21:35.177 Configuration options: 00:21:35.177 -c, --config, --json JSON config file 00:21:35.177 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:21:35.177 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:21:35.177 --wait-for-rpc wait for RPCs to initialize subsystems 00:21:35.177 --rpcs-allowed comma-separated list of permitted RPCS 00:21:35.177 --json-ignore-init-errors don't exit on invalid config entry 00:21:35.177 00:21:35.177 Memory options: 00:21:35.177 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:21:35.177 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:21:35.177 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:21:35.177 -R, --huge-unlink unlink huge files after initialization 00:21:35.177 -n, --mem-channels number of memory channels used for DPDK 00:21:35.177 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:21:35.177 --msg-mempool-size global message memory pool size in count (default: 262143) 00:21:35.177 --no-huge run without using hugepages 00:21:35.177 -i, --shm-id shared memory ID (optional) 00:21:35.177 -g, --single-file-segments force creating just one hugetlbfs file 00:21:35.177 00:21:35.177 PCI options: 00:21:35.177 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:21:35.177 -B, --pci-blocked pci addr to block (can be used more than once) 00:21:35.177 -u, --no-pci disable PCI access 00:21:35.177 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:21:35.177 00:21:35.177 Log options: 00:21:35.177 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:21:35.177 --silence-noticelog disable notice level logging to stderr 00:21:35.177 00:21:35.177 Trace options: 00:21:35.177 --num-trace-entries number of trace entries for each core, must be power of 2, 00:21:35.177 setting 0 to disable trace (default 32768) 00:21:35.177 Tracepoints vary in size and can use more than one trace entry. 00:21:35.177 -e, --tpoint-group [:] 00:21:35.177 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:21:35.177 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:21:35.177 a tracepoint group. First tpoint inside a group can be enabled by 00:21:35.177 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:21:35.177 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:21:35.177 in /include/spdk_internal/trace_defs.h 00:21:35.177 00:21:35.177 Other options: 00:21:35.177 -h, --help show this usage 00:21:35.177 -v, --version print SPDK version 00:21:35.177 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:21:35.177 --env-context Opaque context for use of the env implementation 00:21:35.177 app_ut: unrecognized option '--test-long-opt' 00:21:35.177 app_ut [options] 00:21:35.177 00:21:35.177 CPU options: 00:21:35.177 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:21:35.177 (like [0,1,10]) 00:21:35.177 --lcores lcore to CPU mapping list. The list is in the format: 00:21:35.177 [<,lcores[@CPUs]>...] 00:21:35.177 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:21:35.177 Within the group, '-' is used for range separator, 00:21:35.177 ',' is used for single number separator. 00:21:35.177 '( )' can be omitted for single element group, 00:21:35.177 '@' can be omitted if cpus and lcores have the same value 00:21:35.177 --disable-cpumask-locks Disable CPU core lock files. 
00:21:35.177 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:21:35.177 pollers in the app support interrupt mode) 00:21:35.177 -p, --main-core main (primary) core for DPDK 00:21:35.177 00:21:35.177 Configuration options: 00:21:35.177 -c, --config, --json JSON config file 00:21:35.177 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:21:35.177 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:21:35.178 --wait-for-rpc wait for RPCs to initialize subsystems 00:21:35.178 --rpcs-allowed comma-separated list of permitted RPCS 00:21:35.178 --json-ignore-init-errors don't exit on invalid config entry 00:21:35.178 00:21:35.178 Memory options: 00:21:35.178 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:21:35.178 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:21:35.178 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:21:35.178 -R, --huge-unlink unlink huge files after initialization 00:21:35.178 -n, --mem-channels number of memory channels used for DPDK 00:21:35.178 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:21:35.178 --msg-mempool-size global message memory pool size in count (default: 262143) 00:21:35.178 --no-huge run without using hugepages 00:21:35.178 -i, --shm-id shared memory ID (optional) 00:21:35.178 -g, --single-file-segments force creating just one hugetlbfs file 00:21:35.178 00:21:35.178 PCI options: 00:21:35.178 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:21:35.178 -B, --pci-blocked pci addr to block (can be used more than once) 00:21:35.178 -u, --no-pci disable PCI access 00:21:35.178 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:21:35.178 00:21:35.178 Log options: 00:21:35.178 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:21:35.178 --silence-noticelog disable notice level logging to stderr 00:21:35.178 00:21:35.178 Trace options: 00:21:35.178 --num-trace-entries number of trace entries for each core, must be power of 2, 00:21:35.178 setting 0 to disable trace (default 32768) 00:21:35.178 Tracepoints vary in size and can use more than one trace entry. 00:21:35.178 -e, --tpoint-group [:] 00:21:35.178 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:21:35.178 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:21:35.178 a tracepoint group. First tpoint inside a group can be enabled by 00:21:35.178 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:21:35.178 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:21:35.178 in /include/spdk_internal/trace_defs.h 00:21:35.178 00:21:35.178 Other options: 00:21:35.178 -h, --help show this usage 00:21:35.178 -v, --version print SPDK version 00:21:35.178 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:21:35.178 --env-context Opaque context for use of the env implementation 00:21:35.178 [2024-04-18 19:14:51.008354] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1105:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:21:35.178 [2024-04-18 19:14:51.008848] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1286:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:21:35.178 app_ut [options] 00:21:35.178 00:21:35.178 CPU options: 00:21:35.178 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:21:35.178 (like [0,1,10]) 00:21:35.178 --lcores lcore to CPU mapping list. The list is in the format: 00:21:35.178 [<,lcores[@CPUs]>...] 00:21:35.178 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:21:35.178 Within the group, '-' is used for range separator, 00:21:35.178 ',' is used for single number separator. 00:21:35.178 '( )' can be omitted for single element group, 00:21:35.178 '@' can be omitted if cpus and lcores have the same value 00:21:35.178 --disable-cpumask-locks Disable CPU core lock files. 00:21:35.178 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:21:35.178 pollers in the app support interrupt mode) 00:21:35.178 -p, --main-core main (primary) core for DPDK 00:21:35.178 00:21:35.178 Configuration options: 00:21:35.178 -c, --config, --json JSON config file 00:21:35.178 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:21:35.178 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:21:35.178 --wait-for-rpc wait for RPCs to initialize subsystems 00:21:35.178 --rpcs-allowed comma-separated list of permitted RPCS 00:21:35.178 --json-ignore-init-errors don't exit on invalid config entry 00:21:35.178 00:21:35.178 Memory options: 00:21:35.178 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:21:35.178 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:21:35.178 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:21:35.178 -R, --huge-unlink unlink huge files after initialization 00:21:35.178 -n, --mem-channels number of memory channels used for DPDK 00:21:35.178 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:21:35.178 --msg-mempool-size global message memory pool size in count (default: 262143) 00:21:35.178 --no-huge run without using hugepages 00:21:35.178 -i, --shm-id shared memory ID (optional) 00:21:35.178 -g, --single-file-segments force creating just one hugetlbfs file 00:21:35.178 00:21:35.178 PCI options: 00:21:35.178 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:21:35.178 -B, --pci-blocked pci addr to block (can be used more than once) 00:21:35.178 -u, --no-pci disable PCI access 00:21:35.178 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:21:35.178 00:21:35.178 Log options: 00:21:35.178 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:21:35.178 --silence-noticelog disable notice level logging to stderr 00:21:35.178 00:21:35.178 Trace options: 00:21:35.178 --num-trace-entries number of trace entries for each core, must be power of 2, 00:21:35.178 setting 0 to disable trace (default 32768) 00:21:35.178 Tracepoints vary in size and can use more than one trace entry. 00:21:35.178 -e, --tpoint-group [:] 00:21:35.178 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:21:35.178 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:21:35.178 a tracepoint group. First tpoint inside a group can be enabled by 00:21:35.178 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:21:35.178 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:21:35.178 in /include/spdk_internal/trace_defs.h 00:21:35.178 00:21:35.178 Other options: 00:21:35.178 -h, --help show this usage 00:21:35.178 -v, --version print SPDK version 00:21:35.178 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:21:35.178 --env-context Opaque context for use of the env implementation 00:21:35.178 passed 00:21:35.178 00:21:35.178 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.178 suites 1 1 n/a 0 0 00:21:35.178 tests 1 1 1 0 0 00:21:35.178 asserts 8 8 8 0 n/a 00:21:35.178 00:21:35.178 Elapsed time = 0.003 seconds 00:21:35.178 [2024-04-18 19:14:51.010142] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:21:35.178 19:14:51 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:21:35.178 00:21:35.178 00:21:35.178 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.178 http://cunit.sourceforge.net/ 00:21:35.178 00:21:35.178 00:21:35.178 Suite: app_suite 00:21:35.178 Test: test_create_reactor ...passed 00:21:35.178 Test: test_init_reactors ...passed 00:21:35.178 Test: test_event_call ...passed 00:21:35.178 Test: test_schedule_thread ...passed 00:21:35.178 Test: test_reschedule_thread ...passed 00:21:35.178 Test: test_bind_thread ...passed 00:21:35.178 Test: test_for_each_reactor ...passed 00:21:35.178 Test: test_reactor_stats ...passed 00:21:35.178 Test: test_scheduler ...passed 00:21:35.178 Test: test_governor ...passed 00:21:35.178 00:21:35.178 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.178 suites 1 1 n/a 0 0 00:21:35.178 tests 10 10 10 0 0 00:21:35.178 asserts 344 344 344 0 n/a 00:21:35.178 00:21:35.178 Elapsed time = 0.024 seconds 00:21:35.500 00:21:35.500 real 0m0.129s 00:21:35.500 user 0m0.058s 00:21:35.500 sys 0m0.058s 00:21:35.500 19:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:35.500 19:14:51 -- common/autotest_common.sh@10 -- # set +x 00:21:35.500 ************************************ 00:21:35.500 END TEST unittest_event 00:21:35.500 ************************************ 00:21:35.500 19:14:51 -- unit/unittest.sh@233 -- # uname -s 00:21:35.500 19:14:51 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:21:35.500 19:14:51 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:21:35.500 19:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:35.500 19:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.500 19:14:51 -- common/autotest_common.sh@10 -- # set +x 00:21:35.500 ************************************ 00:21:35.500 START TEST unittest_ftl 00:21:35.500 ************************************ 00:21:35.500 19:14:51 -- common/autotest_common.sh@1111 -- # unittest_ftl 00:21:35.500 19:14:51 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:21:35.500 00:21:35.500 00:21:35.500 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.500 http://cunit.sourceforge.net/ 00:21:35.500 00:21:35.500 00:21:35.500 Suite: ftl_band_suite 00:21:35.500 Test: test_band_block_offset_from_addr_base ...passed 00:21:35.500 Test: test_band_block_offset_from_addr_offset ...passed 00:21:35.500 Test: test_band_addr_from_block_offset ...passed 00:21:35.500 Test: test_band_set_addr ...passed 00:21:35.500 Test: test_invalidate_addr ...passed 00:21:35.500 Test: test_next_xfer_addr 
...passed 00:21:35.500 00:21:35.500 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.500 suites 1 1 n/a 0 0 00:21:35.500 tests 6 6 6 0 0 00:21:35.500 asserts 30356 30356 30356 0 n/a 00:21:35.500 00:21:35.500 Elapsed time = 0.168 seconds 00:21:35.760 19:14:51 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:21:35.760 00:21:35.760 00:21:35.760 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.760 http://cunit.sourceforge.net/ 00:21:35.760 00:21:35.760 00:21:35.760 Suite: ftl_bitmap 00:21:35.760 Test: test_ftl_bitmap_create ...[2024-04-18 19:14:51.468399] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:21:35.760 [2024-04-18 19:14:51.469455] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:21:35.760 passed 00:21:35.760 Test: test_ftl_bitmap_get ...passed 00:21:35.760 Test: test_ftl_bitmap_set ...passed 00:21:35.760 Test: test_ftl_bitmap_clear ...passed 00:21:35.760 Test: test_ftl_bitmap_find_first_set ...passed 00:21:35.760 Test: test_ftl_bitmap_find_first_clear ...passed 00:21:35.760 Test: test_ftl_bitmap_count_set ...passed 00:21:35.760 00:21:35.760 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.760 suites 1 1 n/a 0 0 00:21:35.760 tests 7 7 7 0 0 00:21:35.761 asserts 137 137 137 0 n/a 00:21:35.761 00:21:35.761 Elapsed time = 0.001 seconds 00:21:35.761 19:14:51 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:21:35.761 00:21:35.761 00:21:35.761 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.761 http://cunit.sourceforge.net/ 00:21:35.761 00:21:35.761 00:21:35.761 Suite: ftl_io_suite 00:21:35.761 Test: test_completion ...passed 00:21:35.761 Test: test_multiple_ios ...passed 00:21:35.761 00:21:35.761 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.761 suites 1 1 n/a 0 0 00:21:35.761 tests 2 2 2 0 0 00:21:35.761 asserts 47 47 47 0 n/a 00:21:35.761 00:21:35.761 Elapsed time = 0.003 seconds 00:21:35.761 19:14:51 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:21:35.761 00:21:35.761 00:21:35.761 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.761 http://cunit.sourceforge.net/ 00:21:35.761 00:21:35.761 00:21:35.761 Suite: ftl_mngt 00:21:35.761 Test: test_next_step ...passed 00:21:35.761 Test: test_continue_step ...passed 00:21:35.761 Test: test_get_func_and_step_cntx_alloc ...passed 00:21:35.761 Test: test_fail_step ...passed 00:21:35.761 Test: test_mngt_call_and_call_rollback ...passed 00:21:35.761 Test: test_nested_process_failure ...passed 00:21:35.761 00:21:35.761 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.761 suites 1 1 n/a 0 0 00:21:35.761 tests 6 6 6 0 0 00:21:35.761 asserts 176 176 176 0 n/a 00:21:35.761 00:21:35.761 Elapsed time = 0.002 seconds 00:21:35.761 19:14:51 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:21:35.761 00:21:35.761 00:21:35.761 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.761 http://cunit.sourceforge.net/ 00:21:35.761 00:21:35.761 00:21:35.761 Suite: ftl_mempool 00:21:35.761 Test: test_ftl_mempool_create ...passed 00:21:35.761 Test: test_ftl_mempool_get_put ...passed 00:21:35.761 00:21:35.761 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.761 suites 
1 1 n/a 0 0 00:21:35.761 tests 2 2 2 0 0 00:21:35.761 asserts 36 36 36 0 n/a 00:21:35.761 00:21:35.761 Elapsed time = 0.000 seconds 00:21:35.761 19:14:51 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:21:35.761 00:21:35.761 00:21:35.761 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.761 http://cunit.sourceforge.net/ 00:21:35.761 00:21:35.761 00:21:35.761 Suite: ftl_addr64_suite 00:21:35.761 Test: test_addr_cached ...passed 00:21:35.761 00:21:35.761 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.761 suites 1 1 n/a 0 0 00:21:35.761 tests 1 1 1 0 0 00:21:35.761 asserts 1536 1536 1536 0 n/a 00:21:35.761 00:21:35.761 Elapsed time = 0.000 seconds 00:21:35.761 19:14:51 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:21:35.761 00:21:35.761 00:21:35.761 CUnit - A unit testing framework for C - Version 2.1-3 00:21:35.761 http://cunit.sourceforge.net/ 00:21:35.761 00:21:35.761 00:21:35.761 Suite: ftl_sb 00:21:35.761 Test: test_sb_crc_v2 ...passed 00:21:35.761 Test: test_sb_crc_v3 ...passed 00:21:35.761 Test: test_sb_v3_md_layout ...[2024-04-18 19:14:51.665499] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:21:35.761 [2024-04-18 19:14:51.666428] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:21:35.761 [2024-04-18 19:14:51.666624] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:21:35.761 [2024-04-18 19:14:51.666787] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:21:35.761 [2024-04-18 19:14:51.666939] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:21:35.761 [2024-04-18 19:14:51.667175] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:21:35.761 [2024-04-18 19:14:51.667327] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:21:35.761 [2024-04-18 19:14:51.667531] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:21:35.761 [2024-04-18 19:14:51.667744] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:21:35.761 [2024-04-18 19:14:51.667918] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:21:35.761 [2024-04-18 19:14:51.668064] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:21:35.761 passed 00:21:35.761 Test: test_sb_v5_md_layout ...passed 00:21:35.761 00:21:35.761 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.761 suites 1 1 n/a 0 0 00:21:35.761 tests 4 4 4 0 0 00:21:35.761 asserts 148 148 148 0 n/a 00:21:35.761 00:21:35.761 Elapsed time = 0.003 seconds 00:21:35.761 19:14:51 -- 
unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:21:36.022 00:21:36.022 00:21:36.022 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.022 http://cunit.sourceforge.net/ 00:21:36.022 00:21:36.022 00:21:36.022 Suite: ftl_layout_upgrade 00:21:36.022 Test: test_l2p_upgrade ...passed 00:21:36.022 00:21:36.022 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.022 suites 1 1 n/a 0 0 00:21:36.022 tests 1 1 1 0 0 00:21:36.022 asserts 140 140 140 0 n/a 00:21:36.022 00:21:36.022 Elapsed time = 0.001 seconds 00:21:36.022 00:21:36.022 real 0m0.526s 00:21:36.022 user 0m0.250s 00:21:36.022 sys 0m0.275s 00:21:36.022 19:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.022 19:14:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.022 ************************************ 00:21:36.022 END TEST unittest_ftl 00:21:36.022 ************************************ 00:21:36.022 19:14:51 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:21:36.022 19:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.022 19:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.022 19:14:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.022 ************************************ 00:21:36.022 START TEST unittest_accel 00:21:36.022 ************************************ 00:21:36.022 19:14:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:21:36.022 00:21:36.022 00:21:36.022 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.022 http://cunit.sourceforge.net/ 00:21:36.022 00:21:36.022 00:21:36.022 Suite: accel_sequence 00:21:36.022 Test: test_sequence_fill_copy ...passed 00:21:36.022 Test: test_sequence_abort ...passed 00:21:36.022 Test: test_sequence_append_error ...passed 00:21:36.022 Test: test_sequence_completion_error ...[2024-04-18 19:14:51.829177] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f57970757c0 00:21:36.022 [2024-04-18 19:14:51.829598] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f57970757c0 00:21:36.022 [2024-04-18 19:14:51.829657] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f57970757c0 00:21:36.022 [2024-04-18 19:14:51.829711] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f57970757c0 00:21:36.022 passed 00:21:36.022 Test: test_sequence_decompress ...passed 00:21:36.022 Test: test_sequence_reverse ...passed 00:21:36.022 Test: test_sequence_copy_elision ...passed 00:21:36.022 Test: test_sequence_accel_buffers ...passed 00:21:36.022 Test: test_sequence_memory_domain ...[2024-04-18 19:14:51.843125] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1736:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:21:36.022 [2024-04-18 19:14:51.843378] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1775:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:21:36.022 passed 00:21:36.022 Test: test_sequence_module_memory_domain ...passed 00:21:36.022 Test: test_sequence_crypto ...passed 00:21:36.022 Test: test_sequence_driver 
...[2024-04-18 19:14:51.851673] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1883:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f579642d7c0 using driver: ut 00:21:36.022 [2024-04-18 19:14:51.851844] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1947:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f579642d7c0 through driver: ut 00:21:36.022 passed 00:21:36.022 Test: test_sequence_same_iovs ...passed 00:21:36.022 Test: test_sequence_crc32 ...passed 00:21:36.022 Suite: accel 00:21:36.022 Test: test_spdk_accel_task_complete ...passed 00:21:36.022 Test: test_get_task ...passed 00:21:36.022 Test: test_spdk_accel_submit_copy ...passed 00:21:36.022 Test: test_spdk_accel_submit_dualcast ...[2024-04-18 19:14:51.858118] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:21:36.022 [2024-04-18 19:14:51.858192] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:21:36.022 passed 00:21:36.022 Test: test_spdk_accel_submit_compare ...passed 00:21:36.022 Test: test_spdk_accel_submit_fill ...passed 00:21:36.022 Test: test_spdk_accel_submit_crc32c ...passed 00:21:36.022 Test: test_spdk_accel_submit_crc32cv ...passed 00:21:36.022 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:21:36.022 Test: test_spdk_accel_submit_xor ...passed 00:21:36.022 Test: test_spdk_accel_module_find_by_name ...passed 00:21:36.022 Test: test_spdk_accel_module_register ...passed 00:21:36.022 00:21:36.022 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.022 suites 2 2 n/a 0 0 00:21:36.022 tests 26 26 26 0 0 00:21:36.022 asserts 831 831 831 0 n/a 00:21:36.022 00:21:36.022 Elapsed time = 0.042 seconds 00:21:36.022 00:21:36.022 real 0m0.089s 00:21:36.022 user 0m0.053s 00:21:36.022 sys 0m0.037s 00:21:36.022 19:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.022 19:14:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.022 ************************************ 00:21:36.022 END TEST unittest_accel 00:21:36.022 ************************************ 00:21:36.022 19:14:51 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:21:36.022 19:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.022 19:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.022 19:14:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.283 ************************************ 00:21:36.283 START TEST unittest_ioat 00:21:36.283 ************************************ 00:21:36.283 19:14:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:21:36.283 00:21:36.283 00:21:36.283 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.283 http://cunit.sourceforge.net/ 00:21:36.283 00:21:36.283 00:21:36.283 Suite: ioat 00:21:36.283 Test: ioat_state_check ...passed 00:21:36.283 00:21:36.283 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.283 suites 1 1 n/a 0 0 00:21:36.283 tests 1 1 1 0 0 00:21:36.283 asserts 32 32 32 0 n/a 00:21:36.283 00:21:36.283 Elapsed time = 0.000 seconds 00:21:36.283 00:21:36.283 real 0m0.032s 00:21:36.283 user 0m0.022s 00:21:36.283 sys 0m0.011s 00:21:36.283 19:14:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.283 ************************************ 00:21:36.283 19:14:52 -- common/autotest_common.sh@10 -- 
# set +x 00:21:36.283 END TEST unittest_ioat 00:21:36.283 ************************************ 00:21:36.283 19:14:52 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:36.283 19:14:52 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:21:36.283 19:14:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.283 19:14:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.283 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:36.283 ************************************ 00:21:36.283 START TEST unittest_idxd_user 00:21:36.283 ************************************ 00:21:36.283 19:14:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:21:36.283 00:21:36.283 00:21:36.283 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.283 http://cunit.sourceforge.net/ 00:21:36.283 00:21:36.283 00:21:36.283 Suite: idxd_user 00:21:36.283 Test: test_idxd_wait_cmd ...[2024-04-18 19:14:52.129209] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:21:36.283 [2024-04-18 19:14:52.130026] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:21:36.283 passed 00:21:36.283 Test: test_idxd_reset_dev ...[2024-04-18 19:14:52.130768] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:21:36.283 [2024-04-18 19:14:52.131021] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:21:36.283 passed 00:21:36.283 Test: test_idxd_group_config ...passed 00:21:36.283 Test: test_idxd_wq_config ...passed 00:21:36.283 00:21:36.283 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.283 suites 1 1 n/a 0 0 00:21:36.283 tests 4 4 4 0 0 00:21:36.283 asserts 20 20 20 0 n/a 00:21:36.283 00:21:36.283 Elapsed time = 0.002 seconds 00:21:36.283 00:21:36.283 real 0m0.051s 00:21:36.283 user 0m0.027s 00:21:36.283 sys 0m0.022s 00:21:36.283 19:14:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.283 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:36.283 ************************************ 00:21:36.283 END TEST unittest_idxd_user 00:21:36.283 ************************************ 00:21:36.283 19:14:52 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:21:36.283 19:14:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.283 19:14:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.283 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:36.543 ************************************ 00:21:36.543 START TEST unittest_iscsi 00:21:36.543 ************************************ 00:21:36.543 19:14:52 -- common/autotest_common.sh@1111 -- # unittest_iscsi 00:21:36.543 19:14:52 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:21:36.543 00:21:36.543 00:21:36.543 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.543 http://cunit.sourceforge.net/ 00:21:36.543 00:21:36.543 00:21:36.543 Suite: conn_suite 00:21:36.543 Test: read_task_split_in_order_case ...passed 00:21:36.543 Test: read_task_split_reverse_order_case ...passed 00:21:36.543 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:21:36.543 Test: 
process_non_read_task_completion_test ...passed 00:21:36.543 Test: free_tasks_on_connection ...passed 00:21:36.543 Test: free_tasks_with_queued_datain ...passed 00:21:36.543 Test: abort_queued_datain_task_test ...passed 00:21:36.543 Test: abort_queued_datain_tasks_test ...passed 00:21:36.543 00:21:36.543 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.543 suites 1 1 n/a 0 0 00:21:36.543 tests 8 8 8 0 0 00:21:36.543 asserts 230 230 230 0 n/a 00:21:36.543 00:21:36.543 Elapsed time = 0.000 seconds 00:21:36.543 19:14:52 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:21:36.543 00:21:36.543 00:21:36.543 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.543 http://cunit.sourceforge.net/ 00:21:36.543 00:21:36.543 00:21:36.543 Suite: iscsi_suite 00:21:36.543 Test: param_negotiation_test ...passed 00:21:36.543 Test: list_negotiation_test ...passed 00:21:36.543 Test: parse_valid_test ...passed 00:21:36.543 Test: parse_invalid_test ...[2024-04-18 19:14:52.314416] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:21:36.543 [2024-04-18 19:14:52.314748] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:21:36.543 [2024-04-18 19:14:52.314781] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:21:36.543 [2024-04-18 19:14:52.314831] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:21:36.543 [2024-04-18 19:14:52.314955] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:21:36.543 passed 00:21:36.543 00:21:36.543 [2024-04-18 19:14:52.315010] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:21:36.543 [2024-04-18 19:14:52.315127] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:21:36.543 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.543 suites 1 1 n/a 0 0 00:21:36.543 tests 4 4 4 0 0 00:21:36.543 asserts 161 161 161 0 n/a 00:21:36.543 00:21:36.543 Elapsed time = 0.005 seconds 00:21:36.543 19:14:52 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:21:36.543 00:21:36.543 00:21:36.543 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.543 http://cunit.sourceforge.net/ 00:21:36.543 00:21:36.543 00:21:36.543 Suite: iscsi_target_node_suite 00:21:36.543 Test: add_lun_test_cases ...[2024-04-18 19:14:52.355769] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:21:36.543 [2024-04-18 19:14:52.356307] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:21:36.543 [2024-04-18 19:14:52.356530] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:21:36.543 [2024-04-18 19:14:52.356669] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:21:36.543 [2024-04-18 19:14:52.356828] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:21:36.543 passed 00:21:36.543 Test: allow_any_allowed ...passed 00:21:36.543 Test: allow_ipv6_allowed ...passed 00:21:36.543 Test: allow_ipv6_denied ...passed 
00:21:36.543 Test: allow_ipv6_invalid ...passed 00:21:36.543 Test: allow_ipv4_allowed ...passed 00:21:36.543 Test: allow_ipv4_denied ...passed 00:21:36.543 Test: allow_ipv4_invalid ...passed 00:21:36.543 Test: node_access_allowed ...passed 00:21:36.543 Test: node_access_denied_by_empty_netmask ...passed 00:21:36.543 Test: node_access_multi_initiator_groups_cases ...passed 00:21:36.543 Test: allow_iscsi_name_multi_maps_case ...passed 00:21:36.543 Test: chap_param_test_cases ...[2024-04-18 19:14:52.359057] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:21:36.543 [2024-04-18 19:14:52.359220] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:21:36.543 [2024-04-18 19:14:52.359431] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:21:36.543 [2024-04-18 19:14:52.359580] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:21:36.543 [2024-04-18 19:14:52.359745] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:21:36.543 passed 00:21:36.543 00:21:36.543 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.543 suites 1 1 n/a 0 0 00:21:36.543 tests 13 13 13 0 0 00:21:36.543 asserts 50 50 50 0 n/a 00:21:36.543 00:21:36.543 Elapsed time = 0.002 seconds 00:21:36.543 19:14:52 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:21:36.544 00:21:36.544 00:21:36.544 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.544 http://cunit.sourceforge.net/ 00:21:36.544 00:21:36.544 00:21:36.544 Suite: iscsi_suite 00:21:36.544 Test: op_login_check_target_test ...[2024-04-18 19:14:52.406765] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:21:36.544 passed 00:21:36.544 Test: op_login_session_normal_test ...[2024-04-18 19:14:52.407193] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:21:36.544 [2024-04-18 19:14:52.407233] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:21:36.544 [2024-04-18 19:14:52.407265] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:21:36.544 [2024-04-18 19:14:52.407313] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:21:36.544 [2024-04-18 19:14:52.407519] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:21:36.544 passed 00:21:36.544 Test: maxburstlength_test ...[2024-04-18 19:14:52.407631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:21:36.544 [2024-04-18 19:14:52.407702] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:21:36.544 [2024-04-18 19:14:52.407944] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is 
larger than the value sent by R2T PDU 00:21:36.544 passed 00:21:36.544 Test: underflow_for_read_transfer_test ...[2024-04-18 19:14:52.407995] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:21:36.544 passed 00:21:36.544 Test: underflow_for_zero_read_transfer_test ...passed 00:21:36.544 Test: underflow_for_request_sense_test ...passed 00:21:36.544 Test: underflow_for_check_condition_test ...passed 00:21:36.544 Test: add_transfer_task_test ...passed 00:21:36.544 Test: get_transfer_task_test ...passed 00:21:36.544 Test: del_transfer_task_test ...passed 00:21:36.544 Test: clear_all_transfer_tasks_test ...passed 00:21:36.544 Test: build_iovs_test ...passed 00:21:36.544 Test: build_iovs_with_md_test ...passed 00:21:36.544 Test: pdu_hdr_op_login_test ...[2024-04-18 19:14:52.409598] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:21:36.544 [2024-04-18 19:14:52.409722] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:21:36.544 [2024-04-18 19:14:52.409827] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:21:36.544 passed 00:21:36.544 Test: pdu_hdr_op_text_test ...[2024-04-18 19:14:52.409950] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:21:36.544 [2024-04-18 19:14:52.410034] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:21:36.544 [2024-04-18 19:14:52.410074] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:21:36.544 passed 00:21:36.544 Test: pdu_hdr_op_logout_test ...[2024-04-18 19:14:52.410156] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:21:36.544 passed 00:21:36.544 Test: pdu_hdr_op_scsi_test ...[2024-04-18 19:14:52.410323] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:21:36.544 [2024-04-18 19:14:52.410350] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:21:36.544 [2024-04-18 19:14:52.410396] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:21:36.544 [2024-04-18 19:14:52.410478] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:21:36.544 [2024-04-18 19:14:52.410582] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:21:36.544 [2024-04-18 19:14:52.410772] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:21:36.544 passed 00:21:36.544 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-18 19:14:52.410893] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:21:36.544 [2024-04-18 19:14:52.410966] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:21:36.544 passed 00:21:36.544 Test: pdu_hdr_op_nopout_test ...[2024-04-18 19:14:52.411183] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:21:36.544 [2024-04-18 19:14:52.411333] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:21:36.544 [2024-04-18 19:14:52.411382] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:21:36.544 passed 00:21:36.544 Test: pdu_hdr_op_data_test ...[2024-04-18 19:14:52.411440] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:21:36.544 [2024-04-18 19:14:52.411479] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:21:36.544 [2024-04-18 19:14:52.411752] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:21:36.544 [2024-04-18 19:14:52.411806] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:21:36.544 [2024-04-18 19:14:52.411857] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:21:36.544 [2024-04-18 19:14:52.411923] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:21:36.544 [2024-04-18 19:14:52.412007] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:21:36.544 [2024-04-18 19:14:52.412035] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:21:36.544 passed 00:21:36.544 Test: empty_text_with_cbit_test ...passed 00:21:36.544 Test: pdu_payload_read_test ...[2024-04-18 
19:14:52.414286] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:21:36.544 passed 00:21:36.544 Test: data_out_pdu_sequence_test ...passed 00:21:36.544 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:21:36.544 00:21:36.544 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.544 suites 1 1 n/a 0 0 00:21:36.544 tests 24 24 24 0 0 00:21:36.544 asserts 150253 150253 150253 0 n/a 00:21:36.544 00:21:36.544 Elapsed time = 0.017 seconds 00:21:36.544 19:14:52 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:21:36.544 00:21:36.544 00:21:36.544 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.544 http://cunit.sourceforge.net/ 00:21:36.544 00:21:36.544 00:21:36.544 Suite: init_grp_suite 00:21:36.544 Test: create_initiator_group_success_case ...passed 00:21:36.544 Test: find_initiator_group_success_case ...passed 00:21:36.544 Test: register_initiator_group_twice_case ...passed 00:21:36.544 Test: add_initiator_name_success_case ...passed 00:21:36.544 Test: add_initiator_name_fail_case ...[2024-04-18 19:14:52.463798] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:21:36.544 passed 00:21:36.544 Test: delete_all_initiator_names_success_case ...passed 00:21:36.544 Test: add_netmask_success_case ...passed 00:21:36.544 Test: add_netmask_fail_case ...passed 00:21:36.544 Test: delete_all_netmasks_success_case ...[2024-04-18 19:14:52.464297] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:21:36.544 passed 00:21:36.544 Test: initiator_name_overwrite_all_to_any_case ...passed 00:21:36.544 Test: netmask_overwrite_all_to_any_case ...passed 00:21:36.544 Test: add_delete_initiator_names_case ...passed 00:21:36.544 Test: add_duplicated_initiator_names_case ...passed 00:21:36.544 Test: delete_nonexisting_initiator_names_case ...passed 00:21:36.544 Test: add_delete_netmasks_case ...passed 00:21:36.544 Test: add_duplicated_netmasks_case ...passed 00:21:36.544 Test: delete_nonexisting_netmasks_case ...passed 00:21:36.544 00:21:36.545 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.545 suites 1 1 n/a 0 0 00:21:36.545 tests 17 17 17 0 0 00:21:36.545 asserts 108 108 108 0 n/a 00:21:36.545 00:21:36.545 Elapsed time = 0.001 seconds 00:21:36.804 19:14:52 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:21:36.805 00:21:36.805 00:21:36.805 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.805 http://cunit.sourceforge.net/ 00:21:36.805 00:21:36.805 00:21:36.805 Suite: portal_grp_suite 00:21:36.805 Test: portal_create_ipv4_normal_case ...passed 00:21:36.805 Test: portal_create_ipv6_normal_case ...passed 00:21:36.805 Test: portal_create_ipv4_wildcard_case ...passed 00:21:36.805 Test: portal_create_ipv6_wildcard_case ...passed 00:21:36.805 Test: portal_create_twice_case ...[2024-04-18 19:14:52.501222] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:21:36.805 passed 00:21:36.805 Test: portal_grp_register_unregister_case ...passed 00:21:36.805 Test: portal_grp_register_twice_case ...passed 00:21:36.805 Test: portal_grp_add_delete_case ...passed 00:21:36.805 Test: portal_grp_add_delete_twice_case ...passed 00:21:36.805 00:21:36.805 Run Summary: 
Type Total Ran Passed Failed Inactive 00:21:36.805 suites 1 1 n/a 0 0 00:21:36.805 tests 9 9 9 0 0 00:21:36.805 asserts 44 44 44 0 n/a 00:21:36.805 00:21:36.805 Elapsed time = 0.004 seconds 00:21:36.805 00:21:36.805 real 0m0.283s 00:21:36.805 user 0m0.148s 00:21:36.805 sys 0m0.136s 00:21:36.805 19:14:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.805 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:36.805 ************************************ 00:21:36.805 END TEST unittest_iscsi 00:21:36.805 ************************************ 00:21:36.805 19:14:52 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:21:36.805 19:14:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.805 19:14:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.805 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:36.805 ************************************ 00:21:36.805 START TEST unittest_json 00:21:36.805 ************************************ 00:21:36.805 19:14:52 -- common/autotest_common.sh@1111 -- # unittest_json 00:21:36.805 19:14:52 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:21:36.805 00:21:36.805 00:21:36.805 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.805 http://cunit.sourceforge.net/ 00:21:36.805 00:21:36.805 00:21:36.805 Suite: json 00:21:36.805 Test: test_parse_literal ...passed 00:21:36.805 Test: test_parse_string_simple ...passed 00:21:36.805 Test: test_parse_string_control_chars ...passed 00:21:36.805 Test: test_parse_string_utf8 ...passed 00:21:36.805 Test: test_parse_string_escapes_twochar ...passed 00:21:36.805 Test: test_parse_string_escapes_unicode ...passed 00:21:36.805 Test: test_parse_number ...passed 00:21:36.805 Test: test_parse_array ...passed 00:21:36.805 Test: test_parse_object ...passed 00:21:36.805 Test: test_parse_nesting ...passed 00:21:36.805 Test: test_parse_comment ...passed 00:21:36.805 00:21:36.805 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.805 suites 1 1 n/a 0 0 00:21:36.805 tests 11 11 11 0 0 00:21:36.805 asserts 1516 1516 1516 0 n/a 00:21:36.805 00:21:36.805 Elapsed time = 0.001 seconds 00:21:36.805 19:14:52 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:21:36.805 00:21:36.805 00:21:36.805 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.805 http://cunit.sourceforge.net/ 00:21:36.805 00:21:36.805 00:21:36.805 Suite: json 00:21:36.805 Test: test_strequal ...passed 00:21:36.805 Test: test_num_to_uint16 ...passed 00:21:36.805 Test: test_num_to_int32 ...passed 00:21:36.805 Test: test_num_to_uint64 ...passed 00:21:36.805 Test: test_decode_object ...passed 00:21:36.805 Test: test_decode_array ...passed 00:21:36.805 Test: test_decode_bool ...passed 00:21:36.805 Test: test_decode_uint16 ...passed 00:21:36.805 Test: test_decode_int32 ...passed 00:21:36.805 Test: test_decode_uint32 ...passed 00:21:36.805 Test: test_decode_uint64 ...passed 00:21:36.805 Test: test_decode_string ...passed 00:21:36.805 Test: test_decode_uuid ...passed 00:21:36.805 Test: test_find ...passed 00:21:36.805 Test: test_find_array ...passed 00:21:36.805 Test: test_iterating ...passed 00:21:36.805 Test: test_free_object ...passed 00:21:36.805 00:21:36.805 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.805 suites 1 1 n/a 0 0 00:21:36.805 tests 17 17 17 0 0 00:21:36.805 asserts 236 236 236 0 n/a 00:21:36.805 00:21:36.805 Elapsed time = 0.001 seconds 00:21:36.805 
19:14:52 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:21:36.805 00:21:36.805 00:21:36.805 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.805 http://cunit.sourceforge.net/ 00:21:36.805 00:21:36.805 00:21:36.805 Suite: json 00:21:36.805 Test: test_write_literal ...passed 00:21:36.805 Test: test_write_string_simple ...passed 00:21:36.805 Test: test_write_string_escapes ...passed 00:21:36.805 Test: test_write_string_utf16le ...passed 00:21:36.805 Test: test_write_number_int32 ...passed 00:21:36.805 Test: test_write_number_uint32 ...passed 00:21:36.805 Test: test_write_number_uint128 ...passed 00:21:36.805 Test: test_write_string_number_uint128 ...passed 00:21:36.805 Test: test_write_number_int64 ...passed 00:21:36.805 Test: test_write_number_uint64 ...passed 00:21:36.805 Test: test_write_number_double ...passed 00:21:36.805 Test: test_write_uuid ...passed 00:21:36.805 Test: test_write_array ...passed 00:21:36.805 Test: test_write_object ...passed 00:21:36.805 Test: test_write_nesting ...passed 00:21:36.805 Test: test_write_val ...passed 00:21:36.805 00:21:36.805 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.805 suites 1 1 n/a 0 0 00:21:36.805 tests 16 16 16 0 0 00:21:36.805 asserts 918 918 918 0 n/a 00:21:36.805 00:21:36.805 Elapsed time = 0.005 seconds 00:21:37.064 19:14:52 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:21:37.064 00:21:37.064 00:21:37.064 CUnit - A unit testing framework for C - Version 2.1-3 00:21:37.064 http://cunit.sourceforge.net/ 00:21:37.064 00:21:37.064 00:21:37.064 Suite: jsonrpc 00:21:37.064 Test: test_parse_request ...passed 00:21:37.064 Test: test_parse_request_streaming ...passed 00:21:37.064 00:21:37.064 Run Summary: Type Total Ran Passed Failed Inactive 00:21:37.064 suites 1 1 n/a 0 0 00:21:37.064 tests 2 2 2 0 0 00:21:37.064 asserts 289 289 289 0 n/a 00:21:37.064 00:21:37.064 Elapsed time = 0.005 seconds 00:21:37.064 00:21:37.064 real 0m0.164s 00:21:37.064 user 0m0.101s 00:21:37.064 sys 0m0.063s 00:21:37.064 19:14:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.064 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 ************************************ 00:21:37.064 END TEST unittest_json 00:21:37.064 ************************************ 00:21:37.064 19:14:52 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:21:37.064 19:14:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:37.064 19:14:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.064 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 ************************************ 00:21:37.064 START TEST unittest_rpc 00:21:37.064 ************************************ 00:21:37.064 19:14:52 -- common/autotest_common.sh@1111 -- # unittest_rpc 00:21:37.064 19:14:52 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:21:37.064 00:21:37.064 00:21:37.064 CUnit - A unit testing framework for C - Version 2.1-3 00:21:37.064 http://cunit.sourceforge.net/ 00:21:37.064 00:21:37.064 00:21:37.064 Suite: rpc 00:21:37.064 Test: test_jsonrpc_handler ...passed 00:21:37.064 Test: test_spdk_rpc_is_method_allowed ...passed 00:21:37.065 Test: test_rpc_get_methods ...[2024-04-18 19:14:52.879339] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:21:37.065 passed 00:21:37.065 Test: 
test_rpc_spdk_get_version ...passed 00:21:37.065 Test: test_spdk_rpc_listen_close ...passed 00:21:37.065 Test: test_rpc_run_multiple_servers ...passed 00:21:37.065 00:21:37.065 Run Summary: Type Total Ran Passed Failed Inactive 00:21:37.065 suites 1 1 n/a 0 0 00:21:37.065 tests 6 6 6 0 0 00:21:37.065 asserts 23 23 23 0 n/a 00:21:37.065 00:21:37.065 Elapsed time = 0.001 seconds 00:21:37.065 00:21:37.065 real 0m0.037s 00:21:37.065 user 0m0.024s 00:21:37.065 sys 0m0.013s 00:21:37.065 19:14:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.065 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.065 ************************************ 00:21:37.065 END TEST unittest_rpc 00:21:37.065 ************************************ 00:21:37.065 19:14:52 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:21:37.065 19:14:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:37.065 19:14:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.065 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.065 ************************************ 00:21:37.065 START TEST unittest_notify 00:21:37.065 ************************************ 00:21:37.065 19:14:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:21:37.324 00:21:37.324 00:21:37.324 CUnit - A unit testing framework for C - Version 2.1-3 00:21:37.324 http://cunit.sourceforge.net/ 00:21:37.324 00:21:37.324 00:21:37.324 Suite: app_suite 00:21:37.324 Test: notify ...passed 00:21:37.324 00:21:37.324 Run Summary: Type Total Ran Passed Failed Inactive 00:21:37.324 suites 1 1 n/a 0 0 00:21:37.324 tests 1 1 1 0 0 00:21:37.324 asserts 13 13 13 0 n/a 00:21:37.324 00:21:37.324 Elapsed time = 0.000 seconds 00:21:37.324 00:21:37.324 real 0m0.038s 00:21:37.324 user 0m0.021s 00:21:37.324 sys 0m0.017s 00:21:37.324 19:14:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.324 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 ************************************ 00:21:37.324 END TEST unittest_notify 00:21:37.324 ************************************ 00:21:37.324 19:14:53 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:21:37.324 19:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:37.324 19:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.324 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 ************************************ 00:21:37.324 START TEST unittest_nvme 00:21:37.324 ************************************ 00:21:37.324 19:14:53 -- common/autotest_common.sh@1111 -- # unittest_nvme 00:21:37.324 19:14:53 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:21:37.324 00:21:37.324 00:21:37.324 CUnit - A unit testing framework for C - Version 2.1-3 00:21:37.324 http://cunit.sourceforge.net/ 00:21:37.324 00:21:37.324 00:21:37.324 Suite: nvme 00:21:37.324 Test: test_opc_data_transfer ...passed 00:21:37.324 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:21:37.324 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:21:37.324 Test: test_trid_parse_and_compare ...[2024-04-18 19:14:53.133724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1172:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:21:37.324 [2024-04-18 19:14:53.134113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1229:spdk_nvme_transport_id_parse: *ERROR*: Failed 
to parse transport ID 00:21:37.324 [2024-04-18 19:14:53.134220] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1184:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:21:37.325 [2024-04-18 19:14:53.134268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1229:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:21:37.325 [2024-04-18 19:14:53.134304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1195:parse_next_key: *ERROR*: Key without value 00:21:37.325 [2024-04-18 19:14:53.134398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1229:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:21:37.325 passed 00:21:37.325 Test: test_trid_trtype_str ...passed 00:21:37.325 Test: test_trid_adrfam_str ...passed 00:21:37.325 Test: test_nvme_ctrlr_probe ...[2024-04-18 19:14:53.134639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:21:37.325 passed 00:21:37.325 Test: test_spdk_nvme_probe ...[2024-04-18 19:14:53.134752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:21:37.325 [2024-04-18 19:14:53.134781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:21:37.325 [2024-04-18 19:14:53.134891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:21:37.325 passed 00:21:37.325 Test: test_spdk_nvme_connect ...[2024-04-18 19:14:53.134930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:21:37.325 [2024-04-18 19:14:53.135039] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:21:37.325 [2024-04-18 19:14:53.135520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:21:37.325 passed 00:21:37.325 Test: test_nvme_ctrlr_probe_internal ...[2024-04-18 19:14:53.135613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:21:37.325 [2024-04-18 19:14:53.135780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:21:37.325 [2024-04-18 19:14:53.135829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:37.325 passed 00:21:37.325 Test: test_nvme_init_controllers ...passed 00:21:37.325 Test: test_nvme_driver_init ...[2024-04-18 19:14:53.135923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:21:37.325 [2024-04-18 19:14:53.136059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:21:37.325 [2024-04-18 19:14:53.136101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:21:37.325 [2024-04-18 19:14:53.245411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:21:37.325 [2024-04-18 19:14:53.245618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:21:37.325 passed 00:21:37.325 Test: test_spdk_nvme_detach ...passed 00:21:37.325 Test: test_nvme_completion_poll_cb ...passed 00:21:37.325 Test: test_nvme_user_copy_cmd_complete ...passed 
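A minimal sketch, not part of the captured log, of the public API the nvme_ut trid tests above are exercising: the "Key without ':' or '=' separator" and "Failed to parse transport ID" errors logged from nvme.c are what spdk_nvme_transport_id_parse() reports for malformed "key:value" strings. The declaration is assumed to come from spdk/nvme.h, and the PCIe address below is hypothetical.

/* Sketch only: parse a well-formed transport ID string; a string missing the
 * "key:value" separators produces exactly the parse errors logged above. */
#include <stdio.h>
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_nvme_transport_id trid = {0};

    /* "0000:00:04.0" is a hypothetical PCIe address, used only for illustration. */
    if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:04.0") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }
    printf("parsed traddr: %s\n", trid.traddr);
    return 0;
}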
00:21:37.325 Test: test_nvme_allocate_request_null ...passed 00:21:37.325 Test: test_nvme_allocate_request ...passed 00:21:37.325 Test: test_nvme_free_request ...passed 00:21:37.325 Test: test_nvme_allocate_request_user_copy ...passed 00:21:37.325 Test: test_nvme_robust_mutex_init_shared ...passed 00:21:37.325 Test: test_nvme_request_check_timeout ...passed 00:21:37.325 Test: test_nvme_wait_for_completion ...passed 00:21:37.325 Test: test_spdk_nvme_parse_func ...passed 00:21:37.325 Test: test_spdk_nvme_detach_async ...passed 00:21:37.325 Test: test_nvme_parse_addr ...[2024-04-18 19:14:53.246589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1582:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:21:37.325 passed 00:21:37.325 00:21:37.325 Run Summary: Type Total Ran Passed Failed Inactive 00:21:37.325 suites 1 1 n/a 0 0 00:21:37.325 tests 25 25 25 0 0 00:21:37.325 asserts 326 326 326 0 n/a 00:21:37.325 00:21:37.325 Elapsed time = 0.006 seconds 00:21:37.584 19:14:53 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:21:37.584 00:21:37.584 00:21:37.584 CUnit - A unit testing framework for C - Version 2.1-3 00:21:37.584 http://cunit.sourceforge.net/ 00:21:37.584 00:21:37.584 00:21:37.584 Suite: nvme_ctrlr 00:21:37.584 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-18 19:14:53.292660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 passed 00:21:37.584 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-18 19:14:53.294674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 passed 00:21:37.584 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-18 19:14:53.296019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 passed 00:21:37.584 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-18 19:14:53.297311] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 passed 00:21:37.584 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-18 19:14:53.298627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 [2024-04-18 19:14:53.299834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-18 19:14:53.301089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-18 19:14:53.302293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:21:37.584 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-18 19:14:53.304680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 [2024-04-18 19:14:53.306982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-18 
19:14:53.308191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:21:37.584 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-18 19:14:53.310615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 [2024-04-18 19:14:53.311805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-18 19:14:53.314117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:21:37.584 Test: test_nvme_ctrlr_init_delay ...[2024-04-18 19:14:53.316576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 passed 00:21:37.584 Test: test_alloc_io_qpair_rr_1 ...[2024-04-18 19:14:53.317899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 [2024-04-18 19:14:53.318117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:21:37.584 [2024-04-18 19:14:53.318325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:21:37.584 [2024-04-18 19:14:53.318400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:21:37.584 [2024-04-18 19:14:53.318440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:21:37.584 passed 00:21:37.584 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:21:37.584 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:21:37.584 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-18 19:14:53.318583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 passed 00:21:37.584 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-18 19:14:53.318799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.584 [2024-04-18 19:14:53.318942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:21:37.584 passed 00:21:37.584 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-18 19:14:53.319254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:21:37.584 [2024-04-18 19:14:53.319453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:21:37.584 [2024-04-18 19:14:53.319573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:21:37.584 [2024-04-18 19:14:53.319657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:21:37.584 passed 00:21:37.584 Test: test_nvme_ctrlr_fail ...passed 00:21:37.584 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-04-18 19:14:53.319759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:21:37.584 passed 00:21:37.584 Test: test_nvme_ctrlr_set_supported_features ...passed 00:21:37.584 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:21:37.584 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-18 19:14:53.320083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:21:37.843 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:21:37.843 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:21:37.843 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-18 19:14:53.671521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-18 19:14:53.678521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-18 19:14:53.679733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 [2024-04-18 19:14:53.679796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2883:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:21:37.843 passed 00:21:37.843 Test: test_alloc_io_qpair_fail ...[2024-04-18 19:14:53.680922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_add_remove_process ...passed 00:21:37.843 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:21:37.843 Test: test_nvme_ctrlr_set_state ...[2024-04-18 19:14:53.681022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:21:37.843 [2024-04-18 19:14:53.681169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-18 19:14:53.681209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-18 19:14:53.704813] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-18 19:14:53.752937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_reset ...[2024-04-18 19:14:53.754533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_aer_callback ...[2024-04-18 19:14:53.754928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-18 19:14:53.756359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:21:37.843 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:21:37.843 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-18 19:14:53.758197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:21:37.843 Test: test_nvme_ctrlr_ana_resize ...[2024-04-18 19:14:53.759556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:21:37.843 Test: test_nvme_transport_ctrlr_ready ...passed 00:21:37.843 Test: test_nvme_ctrlr_disable ...[2024-04-18 19:14:53.761180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:21:37.843 [2024-04-18 19:14:53.761225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4080:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:21:37.843 [2024-04-18 19:14:53.761267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:21:37.843 passed 00:21:37.843 00:21:37.843 Run Summary: Type Total Ran Passed Failed Inactive 00:21:37.843 suites 1 1 n/a 0 0 00:21:37.843 tests 43 43 43 0 0 00:21:37.843 asserts 10418 10418 10418 0 n/a 00:21:37.843 00:21:37.843 Elapsed time = 0.430 seconds 00:21:38.142 19:14:53 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:21:38.142 00:21:38.142 
00:21:38.142 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.142 http://cunit.sourceforge.net/ 00:21:38.142 00:21:38.142 00:21:38.142 Suite: nvme_ctrlr_cmd 00:21:38.142 Test: test_get_log_pages ...passed 00:21:38.142 Test: test_set_feature_cmd ...passed 00:21:38.142 Test: test_set_feature_ns_cmd ...passed 00:21:38.142 Test: test_get_feature_cmd ...passed 00:21:38.142 Test: test_get_feature_ns_cmd ...passed 00:21:38.142 Test: test_abort_cmd ...passed 00:21:38.142 Test: test_set_host_id_cmds ...[2024-04-18 19:14:53.820935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:21:38.142 passed 00:21:38.142 Test: test_io_cmd_raw_no_payload_build ...passed 00:21:38.142 Test: test_io_raw_cmd ...passed 00:21:38.142 Test: test_io_raw_cmd_with_md ...passed 00:21:38.142 Test: test_namespace_attach ...passed 00:21:38.142 Test: test_namespace_detach ...passed 00:21:38.142 Test: test_namespace_create ...passed 00:21:38.142 Test: test_namespace_delete ...passed 00:21:38.142 Test: test_doorbell_buffer_config ...passed 00:21:38.142 Test: test_format_nvme ...passed 00:21:38.142 Test: test_fw_commit ...passed 00:21:38.142 Test: test_fw_image_download ...passed 00:21:38.142 Test: test_sanitize ...passed 00:21:38.142 Test: test_directive ...passed 00:21:38.142 Test: test_nvme_request_add_abort ...passed 00:21:38.142 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:21:38.142 Test: test_nvme_ctrlr_cmd_identify ...passed 00:21:38.142 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:21:38.142 00:21:38.142 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.142 suites 1 1 n/a 0 0 00:21:38.142 tests 24 24 24 0 0 00:21:38.142 asserts 198 198 198 0 n/a 00:21:38.142 00:21:38.142 Elapsed time = 0.001 seconds 00:21:38.142 19:14:53 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:21:38.142 00:21:38.142 00:21:38.142 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.142 http://cunit.sourceforge.net/ 00:21:38.142 00:21:38.142 00:21:38.142 Suite: nvme_ctrlr_cmd 00:21:38.142 Test: test_geometry_cmd ...passed 00:21:38.142 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:21:38.142 00:21:38.142 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.142 suites 1 1 n/a 0 0 00:21:38.142 tests 2 2 2 0 0 00:21:38.142 asserts 7 7 7 0 n/a 00:21:38.142 00:21:38.142 Elapsed time = 0.000 seconds 00:21:38.142 19:14:53 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:21:38.142 00:21:38.142 00:21:38.142 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.142 http://cunit.sourceforge.net/ 00:21:38.142 00:21:38.142 00:21:38.142 Suite: nvme 00:21:38.142 Test: test_nvme_ns_construct ...passed 00:21:38.142 Test: test_nvme_ns_uuid ...passed 00:21:38.142 Test: test_nvme_ns_csi ...passed 00:21:38.142 Test: test_nvme_ns_data ...passed 00:21:38.142 Test: test_nvme_ns_set_identify_data ...passed 00:21:38.142 Test: test_spdk_nvme_ns_get_values ...passed 00:21:38.142 Test: test_spdk_nvme_ns_is_active ...passed 00:21:38.142 Test: spdk_nvme_ns_supports ...passed 00:21:38.142 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:21:38.142 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:21:38.142 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:21:38.142 Test: test_nvme_ns_find_id_desc ...passed 00:21:38.142 00:21:38.142 Run Summary: Type Total Ran 
Passed Failed Inactive 00:21:38.142 suites 1 1 n/a 0 0 00:21:38.142 tests 12 12 12 0 0 00:21:38.142 asserts 83 83 83 0 n/a 00:21:38.142 00:21:38.142 Elapsed time = 0.001 seconds 00:21:38.142 19:14:53 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:21:38.142 00:21:38.142 00:21:38.142 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.142 http://cunit.sourceforge.net/ 00:21:38.142 00:21:38.142 00:21:38.142 Suite: nvme_ns_cmd 00:21:38.142 Test: split_test ...passed 00:21:38.142 Test: split_test2 ...passed 00:21:38.142 Test: split_test3 ...passed 00:21:38.142 Test: split_test4 ...passed 00:21:38.142 Test: test_nvme_ns_cmd_flush ...passed 00:21:38.142 Test: test_nvme_ns_cmd_dataset_management ...passed 00:21:38.142 Test: test_nvme_ns_cmd_copy ...passed 00:21:38.142 Test: test_io_flags ...[2024-04-18 19:14:53.936007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:21:38.142 passed 00:21:38.142 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:21:38.142 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:21:38.143 Test: test_nvme_ns_cmd_reservation_register ...passed 00:21:38.143 Test: test_nvme_ns_cmd_reservation_release ...passed 00:21:38.143 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:21:38.143 Test: test_nvme_ns_cmd_reservation_report ...passed 00:21:38.143 Test: test_cmd_child_request ...passed 00:21:38.143 Test: test_nvme_ns_cmd_readv ...passed 00:21:38.143 Test: test_nvme_ns_cmd_read_with_md ...passed 00:21:38.143 Test: test_nvme_ns_cmd_writev ...[2024-04-18 19:14:53.937357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:21:38.143 passed 00:21:38.143 Test: test_nvme_ns_cmd_write_with_md ...passed 00:21:38.143 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:21:38.143 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:21:38.143 Test: test_nvme_ns_cmd_comparev ...passed 00:21:38.143 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:21:38.143 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:21:38.143 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:21:38.143 Test: test_nvme_ns_cmd_setup_request ...passed 00:21:38.143 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:21:38.143 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-04-18 19:14:53.939301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:21:38.143 passed 00:21:38.143 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-04-18 19:14:53.939433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:21:38.143 passed 00:21:38.143 Test: test_nvme_ns_cmd_verify ...passed 00:21:38.143 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:21:38.143 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:21:38.143 00:21:38.143 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.143 suites 1 1 n/a 0 0 00:21:38.143 tests 32 32 32 0 0 00:21:38.143 asserts 550 550 550 0 n/a 00:21:38.143 00:21:38.143 Elapsed time = 0.005 seconds 00:21:38.143 19:14:53 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:21:38.143 00:21:38.143 00:21:38.143 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.143 http://cunit.sourceforge.net/ 00:21:38.143 00:21:38.143 00:21:38.143 Suite: 
nvme_ns_cmd 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:21:38.143 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:21:38.143 00:21:38.143 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.143 suites 1 1 n/a 0 0 00:21:38.143 tests 12 12 12 0 0 00:21:38.143 asserts 123 123 123 0 n/a 00:21:38.143 00:21:38.143 Elapsed time = 0.001 seconds 00:21:38.143 19:14:53 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:21:38.143 00:21:38.143 00:21:38.143 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.143 http://cunit.sourceforge.net/ 00:21:38.143 00:21:38.143 00:21:38.143 Suite: nvme_qpair 00:21:38.143 Test: test3 ...passed 00:21:38.143 Test: test_ctrlr_failed ...passed 00:21:38.143 Test: struct_packing ...passed 00:21:38.143 Test: test_nvme_qpair_process_completions ...[2024-04-18 19:14:54.012816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.143 [2024-04-18 19:14:54.013193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.143 [2024-04-18 19:14:54.013283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:38.143 passed 00:21:38.143 Test: test_nvme_completion_is_retry ...passed 00:21:38.143 Test: test_get_status_string ...passed 00:21:38.143 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-04-18 19:14:54.013369] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:38.143 passed 00:21:38.143 Test: test_nvme_qpair_submit_request ...passed 00:21:38.143 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:21:38.143 Test: test_nvme_qpair_manual_complete_request ...passed 00:21:38.143 Test: test_nvme_qpair_init_deinit ...[2024-04-18 19:14:54.013807] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.143 passed 00:21:38.143 Test: test_nvme_get_sgl_print_info ...passed 00:21:38.143 00:21:38.143 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.143 suites 1 1 n/a 0 0 00:21:38.143 tests 12 12 12 0 0 00:21:38.143 asserts 154 154 154 0 n/a 00:21:38.143 00:21:38.143 Elapsed time = 0.001 seconds 00:21:38.143 19:14:54 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:21:38.143 00:21:38.143 00:21:38.143 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.143 http://cunit.sourceforge.net/ 00:21:38.143 
00:21:38.143 00:21:38.143 Suite: nvme_pcie 00:21:38.143 Test: test_prp_list_append ...[2024-04-18 19:14:54.051707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:21:38.143 [2024-04-18 19:14:54.052095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:21:38.143 [2024-04-18 19:14:54.052136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:21:38.143 [2024-04-18 19:14:54.052418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:21:38.143 passed 00:21:38.143 Test: test_nvme_pcie_hotplug_monitor ...[2024-04-18 19:14:54.052522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:21:38.143 passed 00:21:38.143 Test: test_shadow_doorbell_update ...passed 00:21:38.143 Test: test_build_contig_hw_sgl_request ...passed 00:21:38.143 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:21:38.143 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:21:38.143 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:21:38.143 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-04-18 19:14:54.052717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:21:38.143 passed 00:21:38.143 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:21:38.143 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:21:38.143 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:21:38.143 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:21:38.143 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:21:38.143 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:21:38.143 00:21:38.143 [2024-04-18 19:14:54.052841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:21:38.143 [2024-04-18 19:14:54.052908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:21:38.143 [2024-04-18 19:14:54.052943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:21:38.143 [2024-04-18 19:14:54.052981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:21:38.143 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.143 suites 1 1 n/a 0 0 00:21:38.143 tests 14 14 14 0 0 00:21:38.143 asserts 235 235 235 0 n/a 00:21:38.143 00:21:38.143 Elapsed time = 0.001 seconds 00:21:38.401 19:14:54 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:21:38.401 00:21:38.401 00:21:38.401 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.401 http://cunit.sourceforge.net/ 00:21:38.401 00:21:38.401 00:21:38.401 Suite: nvme_ns_cmd 00:21:38.401 Test: nvme_poll_group_create_test ...passed 00:21:38.401 Test: nvme_poll_group_add_remove_test ...passed 00:21:38.401 Test: nvme_poll_group_process_completions ...passed 00:21:38.401 Test: nvme_poll_group_destroy_test ...passed 00:21:38.401 Test: nvme_poll_group_get_free_stats ...passed 00:21:38.401 00:21:38.401 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.401 suites 1 1 n/a 0 0 00:21:38.401 tests 5 5 5 0 0 00:21:38.401 asserts 75 75 75 0 n/a 00:21:38.401 00:21:38.401 Elapsed time = 0.000 seconds 00:21:38.401 19:14:54 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:21:38.401 00:21:38.401 00:21:38.401 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.401 http://cunit.sourceforge.net/ 00:21:38.401 00:21:38.401 00:21:38.401 Suite: nvme_quirks 00:21:38.401 Test: test_nvme_quirks_striping ...passed 00:21:38.401 00:21:38.401 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.401 suites 1 1 n/a 0 0 00:21:38.401 tests 1 1 1 0 0 00:21:38.401 asserts 5 5 5 0 n/a 00:21:38.401 00:21:38.401 Elapsed time = 0.000 seconds 00:21:38.401 19:14:54 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:21:38.401 00:21:38.401 00:21:38.401 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.401 http://cunit.sourceforge.net/ 00:21:38.401 00:21:38.401 00:21:38.401 Suite: nvme_tcp 00:21:38.401 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:21:38.401 Test: test_nvme_tcp_build_iovs ...passed 00:21:38.401 Test: test_nvme_tcp_build_sgl_request ...[2024-04-18 19:14:54.169253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffce27c1050, and the iovcnt=16, remaining_size=28672 00:21:38.401 passed 00:21:38.401 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:21:38.401 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:21:38.402 Test: test_nvme_tcp_req_complete_safe ...passed 00:21:38.402 Test: test_nvme_tcp_req_get ...passed 00:21:38.402 Test: test_nvme_tcp_req_init ...passed 00:21:38.402 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:21:38.402 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:21:38.402 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:21:38.402 Test: test_nvme_tcp_alloc_reqs ...[2024-04-18 19:14:54.169991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7ffce27c2d70 is same with the state(6) to be set 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:21:38.402 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-18 19:14:54.170306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c1f20 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffce27c2ab0 00:21:38.402 [2024-04-18 19:14:54.170394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1223:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:21:38.402 [2024-04-18 19:14:54.170471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1174:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:21:38.402 [2024-04-18 19:14:54.170600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:38.402 [2024-04-18 19:14:54.170660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.170803] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-18 19:14:54.170844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c23e0 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.171017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:21:38.402 [2024-04-18 19:14:54.171053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:21:38.402 [2024-04-18 19:14:54.171268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:21:38.402 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:21:38.402 Test: test_nvme_tcp_icresp_handle ...[2024-04-18 
19:14:54.171404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffce27c25f0): PDU Sequence Error 00:21:38.402 [2024-04-18 19:14:54.171461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1564:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:21:38.402 [2024-04-18 19:14:54.171491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1571:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:21:38.402 [2024-04-18 19:14:54.171524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c1f30 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.171560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1580:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:21:38.402 [2024-04-18 19:14:54.171592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c1f30 is same with the state(5) to be set 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:21:38.402 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:21:38.402 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-04-18 19:14:54.171638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c1f30 is same with the state(0) to be set 00:21:38.402 [2024-04-18 19:14:54.171710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffce27c2ab0): PDU Sequence Error 00:21:38.402 [2024-04-18 19:14:54.171785] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1641:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffce27c11f0 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:21:38.402 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-18 19:14:54.171930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffce27c0870, errno=0, rc=0 00:21:38.402 [2024-04-18 19:14:54.171983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c0870 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.172030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffce27c0870 is same with the state(5) to be set 00:21:38.402 [2024-04-18 19:14:54.172078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffce27c0870 (0): Success 00:21:38.402 [2024-04-18 19:14:54.172113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffce27c0870 (0): Success 00:21:38.402 [2024-04-18 19:14:54.310149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:21:38.402 [2024-04-18 19:14:54.310260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:21:38.402 Test: test_nvme_tcp_poll_group_get_stats ...[2024-04-18 19:14:54.310521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:21:38.402 [2024-04-18 19:14:54.310553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-18 19:14:54.310767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:21:38.402 [2024-04-18 19:14:54.310802] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:38.402 [2024-04-18 19:14:54.310919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:21:38.402 [2024-04-18 19:14:54.310999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:38.402 [2024-04-18 19:14:54.311129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:21:38.402 passed 00:21:38.402 Test: test_nvme_tcp_qpair_submit_request ...[2024-04-18 19:14:54.311192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:38.402 [2024-04-18 19:14:54.311360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:21:38.402 [2024-04-18 19:14:54.311426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1017:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:21:38.402 passed 00:21:38.402 00:21:38.402 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.402 suites 1 1 n/a 0 0 00:21:38.402 tests 27 27 27 0 0 00:21:38.402 asserts 624 624 624 0 n/a 00:21:38.402 00:21:38.402 Elapsed time = 0.142 seconds 00:21:38.660 19:14:54 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:21:38.660 00:21:38.660 00:21:38.660 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.660 http://cunit.sourceforge.net/ 00:21:38.660 00:21:38.660 00:21:38.660 Suite: nvme_transport 00:21:38.660 Test: test_nvme_get_transport ...passed 00:21:38.660 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:21:38.660 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:21:38.660 Test: test_nvme_transport_poll_group_add_remove ...passed 00:21:38.660 Test: test_ctrlr_get_memory_domains ...passed 00:21:38.660 00:21:38.660 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.660 suites 1 1 n/a 0 0 00:21:38.660 tests 5 5 5 0 0 00:21:38.660 asserts 28 28 28 0 n/a 00:21:38.660 00:21:38.660 Elapsed time = 0.000 seconds 00:21:38.660 19:14:54 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:21:38.660 00:21:38.660 00:21:38.660 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.660 http://cunit.sourceforge.net/ 00:21:38.660 00:21:38.660 00:21:38.660 Suite: nvme_io_msg 00:21:38.660 Test: test_nvme_io_msg_send ...passed 00:21:38.660 Test: 
test_nvme_io_msg_process ...passed 00:21:38.660 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:21:38.660 00:21:38.660 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.660 suites 1 1 n/a 0 0 00:21:38.660 tests 3 3 3 0 0 00:21:38.660 asserts 56 56 56 0 n/a 00:21:38.660 00:21:38.660 Elapsed time = 0.000 seconds 00:21:38.660 19:14:54 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:21:38.660 00:21:38.660 00:21:38.660 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.660 http://cunit.sourceforge.net/ 00:21:38.660 00:21:38.660 00:21:38.660 Suite: nvme_pcie_common 00:21:38.660 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-18 19:14:54.434112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:21:38.660 passed 00:21:38.660 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:21:38.660 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:21:38.661 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-18 19:14:54.435072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:21:38.661 passed 00:21:38.661 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-04-18 19:14:54.435194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:21:38.661 [2024-04-18 19:14:54.435230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:21:38.661 passed 00:21:38.661 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-18 19:14:54.435798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:21:38.661 passed 00:21:38.661 00:21:38.661 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.661 suites 1 1 n/a 0 0 00:21:38.661 tests 6 6 6 0 0 00:21:38.661 asserts 148 148 148 0 n/a 00:21:38.661 00:21:38.661 Elapsed time = 0.002 seconds 00:21:38.661 [2024-04-18 19:14:54.435864] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:21:38.661 19:14:54 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:21:38.661 00:21:38.661 00:21:38.661 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.661 http://cunit.sourceforge.net/ 00:21:38.661 00:21:38.661 00:21:38.661 Suite: nvme_fabric 00:21:38.661 Test: test_nvme_fabric_prop_set_cmd ...passed 00:21:38.661 Test: test_nvme_fabric_prop_get_cmd ...passed 00:21:38.661 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:21:38.661 Test: test_nvme_fabric_discover_probe ...passed 00:21:38.661 Test: test_nvme_fabric_qpair_connect ...[2024-04-18 19:14:54.477155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:21:38.661 passed 00:21:38.661 00:21:38.661 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.661 suites 1 1 n/a 0 0 00:21:38.661 tests 5 5 5 0 0 00:21:38.661 asserts 60 60 60 0 n/a 00:21:38.661 00:21:38.661 Elapsed time = 0.001 seconds 00:21:38.661 19:14:54 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:21:38.661 00:21:38.661 00:21:38.661 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.661 http://cunit.sourceforge.net/ 00:21:38.661 00:21:38.661 00:21:38.661 Suite: nvme_opal 00:21:38.661 Test: test_opal_nvme_security_recv_send_done ...passed 00:21:38.661 Test: test_opal_add_short_atom_header ...passed 00:21:38.661 00:21:38.661 Run Summary: Type Total Ran Passed Failed Inactive 00:21:38.661 suites 1 1 n/a 0 0 00:21:38.661 tests 2 2 2 0 0 00:21:38.661 asserts 22 22 22 0 n/a 00:21:38.661 00:21:38.661 Elapsed time = 0.000 seconds 00:21:38.661 [2024-04-18 19:14:54.514652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:21:38.661 ************************************ 00:21:38.661 END TEST unittest_nvme 00:21:38.661 ************************************ 00:21:38.661 00:21:38.661 real 0m1.419s 00:21:38.661 user 0m0.793s 00:21:38.661 sys 0m0.487s 00:21:38.661 19:14:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:38.661 19:14:54 -- common/autotest_common.sh@10 -- # set +x 00:21:38.661 19:14:54 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:21:38.661 19:14:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:38.661 19:14:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.661 19:14:54 -- common/autotest_common.sh@10 -- # set +x 00:21:38.919 ************************************ 00:21:38.919 START TEST unittest_log 00:21:38.919 ************************************ 00:21:38.919 19:14:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:21:38.919 00:21:38.919 00:21:38.919 CUnit - A unit testing framework for C - Version 2.1-3 00:21:38.919 http://cunit.sourceforge.net/ 00:21:38.919 00:21:38.919 00:21:38.919 Suite: log 00:21:38.919 Test: log_test ...[2024-04-18 19:14:54.634428] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:21:38.919 [2024-04-18 19:14:54.634938] log_ut.c: 57:log_test: *DEBUG*: log test 00:21:38.919 log dump test: 00:21:38.919 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:21:38.919 spdk dump test: 00:21:38.919 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:21:38.919 spdk dump test: 00:21:38.919 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:21:38.919 00000010 65 20 63 68 61 72 73 e chars 00:21:38.919 passed 00:21:39.853 Test: deprecation ...passed 00:21:39.853 00:21:39.853 Run Summary: Type Total Ran Passed Failed Inactive 00:21:39.853 suites 1 1 n/a 0 0 00:21:39.853 tests 2 2 2 0 0 00:21:39.853 asserts 73 73 73 0 n/a 00:21:39.853 00:21:39.853 Elapsed time = 0.001 seconds 00:21:39.853 ************************************ 00:21:39.853 END TEST unittest_log 00:21:39.853 ************************************ 00:21:39.853 00:21:39.853 real 0m1.045s 00:21:39.853 user 0m0.015s 00:21:39.853 sys 0m0.028s 00:21:39.853 19:14:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.853 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:39.853 19:14:55 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:21:39.853 19:14:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:39.853 19:14:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.853 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:39.854 
************************************ 00:21:39.854 START TEST unittest_lvol 00:21:39.854 ************************************ 00:21:39.854 19:14:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:21:39.854 00:21:39.854 00:21:39.854 CUnit - A unit testing framework for C - Version 2.1-3 00:21:39.854 http://cunit.sourceforge.net/ 00:21:39.854 00:21:39.854 00:21:39.854 Suite: lvol 00:21:39.854 Test: lvs_init_unload_success ...[2024-04-18 19:14:55.775234] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:21:39.854 passed 00:21:39.854 Test: lvs_init_destroy_success ...[2024-04-18 19:14:55.776173] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:21:39.854 passed 00:21:39.854 Test: lvs_init_opts_success ...passed 00:21:39.854 Test: lvs_unload_lvs_is_null_fail ...[2024-04-18 19:14:55.777006] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:21:39.854 passed 00:21:39.854 Test: lvs_names ...[2024-04-18 19:14:55.777189] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:21:39.854 [2024-04-18 19:14:55.777370] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:21:39.854 [2024-04-18 19:14:55.777569] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:21:39.854 passed 00:21:39.854 Test: lvol_create_destroy_success ...passed 00:21:39.854 Test: lvol_create_fail ...[2024-04-18 19:14:55.778731] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:21:39.854 [2024-04-18 19:14:55.778958] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:21:39.854 passed 00:21:39.854 Test: lvol_destroy_fail ...[2024-04-18 19:14:55.779587] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:21:39.854 passed 00:21:39.854 Test: lvol_close ...[2024-04-18 19:14:55.780203] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:21:39.854 [2024-04-18 19:14:55.780413] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:21:39.854 passed 00:21:39.854 Test: lvol_resize ...passed 00:21:39.854 Test: lvol_set_read_only ...passed 00:21:39.854 Test: test_lvs_load ...[2024-04-18 19:14:55.781911] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:21:39.854 [2024-04-18 19:14:55.782063] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:21:39.854 passed 00:21:39.854 Test: lvols_load ...[2024-04-18 19:14:55.782762] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:21:39.854 [2024-04-18 19:14:55.782990] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:21:39.854 passed 00:21:39.854 Test: lvol_open ...passed 00:21:39.854 Test: lvol_snapshot ...passed 00:21:40.113 Test: lvol_snapshot_fail ...[2024-04-18 19:14:55.784706] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:21:40.113 passed 00:21:40.113 
Test: lvol_clone ...passed 00:21:40.113 Test: lvol_clone_fail ...[2024-04-18 19:14:55.785838] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:21:40.113 passed 00:21:40.113 Test: lvol_iter_clones ...passed 00:21:40.113 Test: lvol_refcnt ...[2024-04-18 19:14:55.786875] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 3bad5eb2-ae6f-41d2-8771-75d7ac5f0eb9 because it is still open 00:21:40.113 passed 00:21:40.113 Test: lvol_names ...[2024-04-18 19:14:55.787587] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:21:40.113 [2024-04-18 19:14:55.787823] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:21:40.113 [2024-04-18 19:14:55.788195] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:21:40.113 passed 00:21:40.113 Test: lvol_create_thin_provisioned ...passed 00:21:40.113 Test: lvol_rename ...[2024-04-18 19:14:55.789336] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:21:40.113 [2024-04-18 19:14:55.789506] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:21:40.113 passed 00:21:40.113 Test: lvs_rename ...[2024-04-18 19:14:55.789977] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:21:40.113 passed 00:21:40.113 Test: lvol_inflate ...[2024-04-18 19:14:55.790523] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:21:40.113 passed 00:21:40.113 Test: lvol_decouple_parent ...[2024-04-18 19:14:55.791086] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:21:40.113 passed 00:21:40.113 Test: lvol_get_xattr ...passed 00:21:40.113 Test: lvol_esnap_reload ...passed 00:21:40.113 Test: lvol_esnap_create_bad_args ...[2024-04-18 19:14:55.792346] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:21:40.113 [2024-04-18 19:14:55.792466] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:21:40.113 [2024-04-18 19:14:55.792546] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:21:40.113 [2024-04-18 19:14:55.792797] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:21:40.114 [2024-04-18 19:14:55.793088] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:21:40.114 passed 00:21:40.114 Test: lvol_esnap_create_delete ...passed 00:21:40.114 Test: lvol_esnap_load_esnaps ...[2024-04-18 19:14:55.794066] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:21:40.114 passed 00:21:40.114 Test: lvol_esnap_missing ...[2024-04-18 19:14:55.794438] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:21:40.114 [2024-04-18 19:14:55.794575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:21:40.114 passed 00:21:40.114 Test: lvol_esnap_hotplug ... 00:21:40.114 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:21:40.114 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:21:40.114 [2024-04-18 19:14:55.796024] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol acce5b15-804c-4d2c-8aa9-0bb4fa024af3: failed to create esnap bs_dev: error -12 00:21:40.114 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:21:40.114 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:21:40.114 [2024-04-18 19:14:55.796555] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol ee3652ae-4541-4620-aeba-c245059618fa: failed to create esnap bs_dev: error -12 00:21:40.114 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:21:40.114 [2024-04-18 19:14:55.797009] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 2e3f88b8-ffa8-4075-b551-6b1b0fc40ae4: failed to create esnap bs_dev: error -12 00:21:40.114 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:21:40.114 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:21:40.114 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:21:40.114 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:21:40.114 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:21:40.114 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:21:40.114 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:21:40.114 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:21:40.114 passed 00:21:40.114 Test: lvol_get_by ...passed 00:21:40.114 00:21:40.114 Run Summary: Type Total Ran Passed Failed Inactive 00:21:40.114 suites 1 1 n/a 0 0 00:21:40.114 tests 34 34 34 0 0 00:21:40.114 asserts 1439 1439 1439 0 n/a 00:21:40.114 00:21:40.114 Elapsed time = 0.015 seconds 00:21:40.114 00:21:40.114 real 0m0.067s 00:21:40.114 user 0m0.046s 00:21:40.114 sys 0m0.012s 00:21:40.114 19:14:55 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:40.114 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.114 ************************************ 00:21:40.114 END TEST unittest_lvol 00:21:40.114 ************************************ 00:21:40.114 19:14:55 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:40.114 19:14:55 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:21:40.114 19:14:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:40.114 19:14:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.114 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.114 ************************************ 00:21:40.114 START TEST unittest_nvme_rdma 00:21:40.114 ************************************ 00:21:40.114 19:14:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:21:40.114 00:21:40.114 00:21:40.114 CUnit - A unit testing framework for C - Version 2.1-3 00:21:40.114 http://cunit.sourceforge.net/ 00:21:40.114 00:21:40.114 00:21:40.114 Suite: nvme_rdma 00:21:40.114 Test: test_nvme_rdma_build_sgl_request ...[2024-04-18 19:14:55.942748] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:21:40.114 [2024-04-18 19:14:55.943240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:21:40.114 [2024-04-18 19:14:55.943486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:21:40.114 Test: test_nvme_rdma_build_contig_request ...[2024-04-18 19:14:55.943996] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:21:40.114 Test: test_nvme_rdma_create_reqs ...[2024-04-18 19:14:55.944400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_create_rsps ...[2024-04-18 19:14:55.945017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-18 19:14:55.945466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:21:40.114 [2024-04-18 19:14:55.945630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_poller_create ...passed 00:21:40.114 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-04-18 19:14:55.946236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_ctrlr_construct ...passed 00:21:40.114 Test: test_nvme_rdma_req_put_and_get ...passed 00:21:40.114 Test: test_nvme_rdma_req_init ...passed 00:21:40.114 Test: test_nvme_rdma_validate_cm_event ...[2024-04-18 19:14:55.947283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:21:40.114 [2024-04-18 19:14:55.947443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_qpair_init ...passed 00:21:40.114 Test: test_nvme_rdma_qpair_submit_request ...passed 00:21:40.114 Test: test_nvme_rdma_memory_domain ...[2024-04-18 19:14:55.948217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:21:40.114 passed 00:21:40.114 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:21:40.114 Test: test_rdma_get_memory_translation ...[2024-04-18 19:14:55.948675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:21:40.114 [2024-04-18 19:14:55.948839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:21:40.114 passed 00:21:40.114 Test: test_get_rdma_qpair_from_wc ...passed 00:21:40.114 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:21:40.114 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-18 19:14:55.949290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:21:40.114 [2024-04-18 19:14:55.949416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:21:40.114 passed 00:21:40.114 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-18 19:14:55.949762] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:21:40.114 [2024-04-18 19:14:55.949909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:21:40.114 [2024-04-18 19:14:55.950022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff36c7ee00 on poll group 0x60c000000040 00:21:40.114 [2024-04-18 19:14:55.950164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:21:40.114 [2024-04-18 19:14:55.950302] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:21:40.114 [2024-04-18 19:14:55.950428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff36c7ee00 on poll group 0x60c000000040 00:21:40.114 [2024-04-18 19:14:55.950602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:21:40.114 passed 00:21:40.114 00:21:40.114 Run Summary: Type Total Ran Passed Failed Inactive 00:21:40.114 suites 1 1 n/a 0 0 00:21:40.114 tests 22 22 22 0 0 00:21:40.114 asserts 412 412 412 0 n/a 00:21:40.114 00:21:40.114 Elapsed time = 0.004 seconds 00:21:40.114 00:21:40.114 real 0m0.052s 00:21:40.114 user 0m0.038s 00:21:40.114 sys 0m0.009s 00:21:40.114 19:14:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:40.114 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.114 ************************************ 00:21:40.114 END TEST unittest_nvme_rdma 00:21:40.114 ************************************ 00:21:40.114 19:14:56 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:21:40.114 19:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:40.114 19:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.114 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.373 ************************************ 00:21:40.373 START TEST unittest_nvmf_transport 00:21:40.373 ************************************ 00:21:40.373 19:14:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:21:40.373 00:21:40.373 00:21:40.373 CUnit - A unit testing framework for C - Version 2.1-3 00:21:40.373 http://cunit.sourceforge.net/ 00:21:40.373 00:21:40.373 00:21:40.373 Suite: nvmf 00:21:40.373 Test: test_spdk_nvmf_transport_create ...[2024-04-18 19:14:56.085252] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 249:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:21:40.373 [2024-04-18 19:14:56.085979] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 269:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:21:40.374 [2024-04-18 19:14:56.086212] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 273:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:21:40.374 [2024-04-18 19:14:56.086592] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 256:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:21:40.374 passed 00:21:40.374 Test: test_nvmf_transport_poll_group_create ...passed 00:21:40.374 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-18 19:14:56.087268] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 790:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:21:40.374 [2024-04-18 19:14:56.087549] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 795:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:21:40.374 [2024-04-18 19:14:56.087715] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 800:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:21:40.374 passed 00:21:40.374 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:21:40.374 00:21:40.374 Run Summary: Type Total Ran Passed Failed Inactive 00:21:40.374 suites 1 1 n/a 0 0 00:21:40.374 tests 4 4 4 0 0 00:21:40.374 asserts 49 49 49 0 n/a 00:21:40.374 00:21:40.374 Elapsed time = 0.002 seconds 00:21:40.374 00:21:40.374 real 0m0.061s 00:21:40.374 user 0m0.043s 00:21:40.374 sys 0m0.017s 00:21:40.374 19:14:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:40.374 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.374 ************************************ 00:21:40.374 END TEST unittest_nvmf_transport 00:21:40.374 ************************************ 00:21:40.374 19:14:56 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:21:40.374 19:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:40.374 19:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.374 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.374 ************************************ 00:21:40.374 START TEST unittest_rdma 00:21:40.374 ************************************ 00:21:40.374 19:14:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:21:40.374 00:21:40.374 00:21:40.374 CUnit - A unit testing framework for C - Version 2.1-3 00:21:40.374 http://cunit.sourceforge.net/ 00:21:40.374 00:21:40.374 00:21:40.374 Suite: rdma_common 00:21:40.374 Test: test_spdk_rdma_pd ...[2024-04-18 19:14:56.214742] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:21:40.374 [2024-04-18 19:14:56.215243] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:21:40.374 passed 00:21:40.374 00:21:40.374 Run Summary: Type Total Ran Passed Failed Inactive 00:21:40.374 suites 1 1 n/a 0 0 00:21:40.374 tests 1 1 1 0 0 00:21:40.374 asserts 31 31 31 0 n/a 00:21:40.374 00:21:40.374 Elapsed time = 0.001 seconds 00:21:40.374 00:21:40.374 real 0m0.031s 00:21:40.374 user 0m0.011s 00:21:40.374 sys 0m0.019s 00:21:40.374 19:14:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:40.374 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.374 ************************************ 00:21:40.374 END TEST unittest_rdma 00:21:40.374 ************************************ 00:21:40.374 19:14:56 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:40.374 19:14:56 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:21:40.374 19:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:40.374 19:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.374 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.631 ************************************ 00:21:40.631 START TEST unittest_nvme_cuse 00:21:40.631 ************************************ 00:21:40.631 19:14:56 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:21:40.631 00:21:40.631 00:21:40.632 CUnit - A unit testing framework for C - Version 2.1-3 00:21:40.632 http://cunit.sourceforge.net/ 00:21:40.632 00:21:40.632 00:21:40.632 Suite: nvme_cuse 00:21:40.632 Test: test_cuse_nvme_submit_io_read_write ...passed 00:21:40.632 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:21:40.632 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:21:40.632 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:21:40.632 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:21:40.632 Test: test_cuse_nvme_submit_io ...[2024-04-18 19:14:56.321191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:21:40.632 passed 00:21:40.632 Test: test_cuse_nvme_reset ...[2024-04-18 19:14:56.321706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:21:40.632 passed 00:21:41.567 Test: test_nvme_cuse_stop ...passed 00:21:41.567 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:21:41.567 00:21:41.567 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.567 suites 1 1 n/a 0 0 00:21:41.567 tests 9 9 9 0 0 00:21:41.567 asserts 118 118 118 0 n/a 00:21:41.567 00:21:41.567 Elapsed time = 1.001 seconds 00:21:41.567 ************************************ 00:21:41.567 END TEST unittest_nvme_cuse 00:21:41.567 ************************************ 00:21:41.567 00:21:41.567 real 0m1.038s 00:21:41.567 user 0m0.470s 00:21:41.567 sys 0m0.564s 00:21:41.567 19:14:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:41.567 19:14:57 -- common/autotest_common.sh@10 -- # set +x 00:21:41.567 19:14:57 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:21:41.567 19:14:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:41.567 19:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.567 19:14:57 -- common/autotest_common.sh@10 -- # set +x 00:21:41.567 ************************************ 00:21:41.567 START TEST unittest_nvmf 00:21:41.567 ************************************ 00:21:41.567 19:14:57 -- common/autotest_common.sh@1111 -- # unittest_nvmf 00:21:41.567 19:14:57 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:21:41.567 00:21:41.567 00:21:41.567 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.567 http://cunit.sourceforge.net/ 00:21:41.567 00:21:41.567 00:21:41.567 Suite: nvmf 00:21:41.567 Test: test_get_log_page ...[2024-04-18 19:14:57.473220] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2597:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:21:41.567 passed 00:21:41.567 Test: test_process_fabrics_cmd ...[2024-04-18 19:14:57.473888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4663:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:21:41.567 passed 00:21:41.567 Test: test_connect ...[2024-04-18 19:14:57.474831] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 991:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:21:41.567 [2024-04-18 19:14:57.475069] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 854:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:21:41.567 [2024-04-18 19:14:57.475188] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1030:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:21:41.567 [2024-04-18 19:14:57.475311] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 801:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:21:41.567 [2024-04-18 19:14:57.475537] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 865:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:21:41.567 [2024-04-18 19:14:57.475723] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 872:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:21:41.567 [2024-04-18 19:14:57.475846] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 878:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:21:41.567 [2024-04-18 19:14:57.475995] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:21:41.567 [2024-04-18 19:14:57.476245] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 740:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:21:41.567 [2024-04-18 19:14:57.476431] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 655:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:21:41.567 [2024-04-18 19:14:57.476831] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 661:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:21:41.567 [2024-04-18 19:14:57.477054] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 667:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:21:41.567 [2024-04-18 19:14:57.477271] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 674:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:21:41.567 [2024-04-18 19:14:57.477441] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 698:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:21:41.567 [2024-04-18 19:14:57.477622] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 278:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:21:41.567 [2024-04-18 19:14:57.478733] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 785:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:21:41.567 [2024-04-18 19:14:57.479028] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 785:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:21:41.567 passed 00:21:41.567 Test: test_get_ns_id_desc_list ...passed 00:21:41.567 Test: test_identify_ns ...[2024-04-18 19:14:57.480671] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:41.567 [2024-04-18 19:14:57.481286] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:21:41.567 [2024-04-18 19:14:57.481515] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:21:41.567 passed 00:21:41.567 Test: test_identify_ns_iocs_specific ...[2024-04-18 19:14:57.482052] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:41.567 [2024-04-18 19:14:57.482516] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2691:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:41.567 passed 00:21:41.567 Test: test_reservation_write_exclusive ...passed 00:21:41.567 Test: test_reservation_exclusive_access ...passed 00:21:41.567 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 
00:21:41.567 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:21:41.567 Test: test_reservation_notification_log_page ...passed 00:21:41.567 Test: test_get_dif_ctx ...passed 00:21:41.567 Test: test_set_get_features ...[2024-04-18 19:14:57.484700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1627:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:21:41.567 [2024-04-18 19:14:57.484900] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1627:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:21:41.567 [2024-04-18 19:14:57.485043] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1638:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:21:41.567 [2024-04-18 19:14:57.485191] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1714:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:21:41.567 passed 00:21:41.567 Test: test_identify_ctrlr ...passed 00:21:41.567 Test: test_identify_ctrlr_iocs_specific ...passed 00:21:41.567 Test: test_custom_admin_cmd ...passed 00:21:41.567 Test: test_fused_compare_and_write ...[2024-04-18 19:14:57.486609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4198:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:21:41.567 [2024-04-18 19:14:57.486812] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4187:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:21:41.567 [2024-04-18 19:14:57.487018] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4205:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:21:41.567 passed 00:21:41.567 Test: test_multi_async_event_reqs ...passed 00:21:41.567 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:21:41.567 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:21:41.567 Test: test_multi_async_events ...passed 00:21:41.567 Test: test_rae ...passed 00:21:41.567 Test: test_nvmf_ctrlr_create_destruct ...passed 00:21:41.567 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:21:41.568 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-18 19:14:57.489033] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4663:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:21:41.568 passed 00:21:41.568 Test: test_zcopy_read ...passed 00:21:41.568 Test: test_zcopy_write ...passed 00:21:41.568 Test: test_nvmf_property_set ...passed 00:21:41.568 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-18 19:14:57.489904] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1925:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:21:41.568 passed[2024-04-18 19:14:57.490041] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1925:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:21:41.568 00:21:41.568 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-04-18 19:14:57.490215] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1948:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:21:41.568 [2024-04-18 19:14:57.490296] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1954:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:21:41.568 [2024-04-18 19:14:57.490380] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1966:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:21:41.568 passed 00:21:41.568 Test: test_nvmf_ctrlr_ns_attachment ...passed 
00:21:41.568 Test: test_nvmf_check_qpair_active ...passed[2024-04-18 19:14:57.490875] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4663:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:21:41.568 [2024-04-18 19:14:57.490969] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:21:41.568 00:21:41.568 00:21:41.568 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.568 suites 1 1 n/a 0 0 00:21:41.568 tests 32 32 32 0 0 00:21:41.568 asserts 977 977 977 0 n/a 00:21:41.568 00:21:41.568 Elapsed time = 0.009 seconds 00:21:41.827 19:14:57 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:21:41.827 00:21:41.827 00:21:41.827 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.827 http://cunit.sourceforge.net/ 00:21:41.827 00:21:41.827 00:21:41.827 Suite: nvmf 00:21:41.827 Test: test_get_rw_params ...passed 00:21:41.827 Test: test_lba_in_range ...passed 00:21:41.827 Test: test_get_dif_ctx ...passed 00:21:41.827 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:21:41.827 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-18 19:14:57.531214] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:21:41.827 [2024-04-18 19:14:57.531708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:21:41.827 [2024-04-18 19:14:57.532070] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:21:41.827 passed 00:21:41.827 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-18 19:14:57.532339] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:21:41.827 [2024-04-18 19:14:57.532518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 960:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:21:41.827 passed 00:21:41.827 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-18 19:14:57.532789] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:21:41.827 [2024-04-18 19:14:57.532893] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:21:41.827 [2024-04-18 19:14:57.532987] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:21:41.827 [2024-04-18 19:14:57.533099] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:21:41.827 passed 00:21:41.827 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:21:41.827 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:21:41.827 00:21:41.827 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.827 suites 1 1 n/a 0 0 00:21:41.827 tests 9 9 9 0 0 00:21:41.827 asserts 157 157 157 0 n/a 00:21:41.827 00:21:41.827 Elapsed time = 0.001 seconds 00:21:41.827 19:14:57 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:21:41.827 00:21:41.827 00:21:41.827 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.827 http://cunit.sourceforge.net/ 00:21:41.827 
00:21:41.827 00:21:41.827 Suite: nvmf 00:21:41.827 Test: test_discovery_log ...passed 00:21:41.827 Test: test_discovery_log_with_filters ...passed 00:21:41.827 00:21:41.827 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.827 suites 1 1 n/a 0 0 00:21:41.827 tests 2 2 2 0 0 00:21:41.827 asserts 238 238 238 0 n/a 00:21:41.827 00:21:41.827 Elapsed time = 0.003 seconds 00:21:41.827 19:14:57 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:21:41.827 00:21:41.827 00:21:41.827 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.827 http://cunit.sourceforge.net/ 00:21:41.827 00:21:41.827 00:21:41.827 Suite: nvmf 00:21:41.828 Test: nvmf_test_create_subsystem ...[2024-04-18 19:14:57.619163] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:21:41.828 [2024-04-18 19:14:57.619677] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:21:41.828 [2024-04-18 19:14:57.619887] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:21:41.828 [2024-04-18 19:14:57.620019] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:21:41.828 [2024-04-18 19:14:57.620132] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:21:41.828 [2024-04-18 19:14:57.620284] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:21:41.828 [2024-04-18 19:14:57.620503] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:21:41.828 [2024-04-18 19:14:57.620811] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:21:41.828 [2024-04-18 19:14:57.621021] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:21:41.828 [2024-04-18 19:14:57.621156] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:21:41.828 [2024-04-18 19:14:57.621266] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:21:41.828 passed 00:21:41.828 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-18 19:14:57.621637] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1982:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:21:41.828 [2024-04-18 19:14:57.621867] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1963:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:21:41.828 passed 00:21:41.828 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:21:41.828 Test: test_spdk_nvmf_ns_visible ...[2024-04-18 19:14:57.622508] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:21:41.828 passed 00:21:41.828 Test: test_reservation_register ...[2024-04-18 19:14:57.623306] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 [2024-04-18 19:14:57.623558] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3072:nvmf_ns_reservation_register: *ERROR*: No registrant 00:21:41.828 passed 00:21:41.828 Test: test_reservation_register_with_ptpl ...passed 00:21:41.828 Test: test_reservation_acquire_preempt_1 ...[2024-04-18 19:14:57.625071] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_reservation_acquire_release_with_ptpl ...passed 00:21:41.828 Test: test_reservation_release ...[2024-04-18 19:14:57.627004] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_reservation_unregister_notification ...[2024-04-18 19:14:57.627682] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_reservation_release_notification ...[2024-04-18 19:14:57.628229] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_reservation_release_notification_write_exclusive ...[2024-04-18 19:14:57.628775] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_reservation_clear_notification ...[2024-04-18 19:14:57.629321] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_reservation_preempt_notification ...[2024-04-18 19:14:57.629812] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3014:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:21:41.828 passed 00:21:41.828 Test: test_spdk_nvmf_ns_event ...passed 00:21:41.828 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:21:41.828 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:21:41.828 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-18 19:14:57.631306] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 262:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:21:41.828 [2024-04-18 19:14:57.631522] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1011:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:21:41.828 passed 00:21:41.828 Test: test_nvmf_ns_reservation_report ...[2024-04-18 19:14:57.631963] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3377:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:21:41.828 passed 00:21:41.828 Test: test_nvmf_nqn_is_valid ...[2024-04-18 19:14:57.632303] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:21:41.828 [2024-04-18 19:14:57.632449] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:f6e110af-544d-46de-b706-2304610e5fc": uuid is not the correct length 00:21:41.828 [2024-04-18 19:14:57.632612] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:21:41.828 passed 00:21:41.828 Test: test_nvmf_ns_reservation_restore ...[2024-04-18 19:14:57.633044] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2571:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:21:41.828 passed 00:21:41.828 Test: test_nvmf_subsystem_state_change ...passed 00:21:41.828 Test: test_nvmf_reservation_custom_ops ...passed 00:21:41.828 00:21:41.828 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.828 suites 1 1 n/a 0 0 00:21:41.828 tests 23 23 23 0 0 00:21:41.828 asserts 482 482 482 0 n/a 00:21:41.828 00:21:41.828 Elapsed time = 0.010 seconds 00:21:41.828 19:14:57 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:21:41.828 00:21:41.828 00:21:41.828 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.828 http://cunit.sourceforge.net/ 00:21:41.828 00:21:41.828 00:21:41.828 Suite: nvmf 00:21:41.828 Test: test_nvmf_tcp_create ...[2024-04-18 19:14:57.713683] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 742:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:21:41.828 passed 00:21:41.828 Test: test_nvmf_tcp_destroy ...passed 00:21:42.087 Test: test_nvmf_tcp_poll_group_create ...passed 00:21:42.087 Test: test_nvmf_tcp_send_c2h_data ...passed 00:21:42.087 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:21:42.087 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:21:42.087 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:21:42.087 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-04-18 19:14:57.837176] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.837290] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.837429] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.837571] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.837631] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 passed 00:21:42.087 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:21:42.087 Test: test_nvmf_tcp_icreq_handle ...[2024-04-18 19:14:57.838371] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:21:42.087 [2024-04-18 19:14:57.838543] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.838697] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.838833] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:21:42.087 [2024-04-18 19:14:57.838969] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.839092] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.839170] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.839339] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.839527] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 passed 00:21:42.087 Test: test_nvmf_tcp_check_xfer_type ...passed 00:21:42.087 Test: test_nvmf_tcp_invalid_sgl ...[2024-04-18 19:14:57.840058] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2497:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:21:42.087 [2024-04-18 19:14:57.840195] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.840320] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73970 is same with the state(5) to be set 00:21:42.087 passed 00:21:42.087 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-18 19:14:57.840588] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2229:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc16c746d0 00:21:42.087 [2024-04-18 19:14:57.840781] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:21:42.087 [2024-04-18 19:14:57.840927] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.841042] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2286:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc16c73e30 00:21:42.087 [2024-04-18 19:14:57.841144] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.841211] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.841347] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2239:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:21:42.087 [2024-04-18 19:14:57.841416] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.087 [2024-04-18 19:14:57.841493] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.087 [2024-04-18 19:14:57.841619] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2278:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:21:42.088 [2024-04-18 19:14:57.841736] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.841860] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 [2024-04-18 19:14:57.841972] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.842085] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 [2024-04-18 19:14:57.842225] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.842351] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 [2024-04-18 19:14:57.842485] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.842541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 [2024-04-18 19:14:57.842689] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.842797] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 [2024-04-18 19:14:57.842927] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.842989] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 [2024-04-18 19:14:57.843163] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:21:42.088 [2024-04-18 19:14:57.843278] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc16c73e30 is same with the state(5) to be set 00:21:42.088 passed 00:21:42.088 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:21:42.088 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-04-18 19:14:57.869499] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:21:42.088 [2024-04-18 19:14:57.869619] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:21:42.088 passed 00:21:42.088 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-18 19:14:57.870275] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:21:42.088 [2024-04-18 19:14:57.870429] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:21:42.088 passed 00:21:42.088 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-18 19:14:57.870815] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:21:42.088 [2024-04-18 19:14:57.870957] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
00:21:42.088 passed 00:21:42.088 00:21:42.088 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.088 suites 1 1 n/a 0 0 00:21:42.088 tests 17 17 17 0 0 00:21:42.088 asserts 222 222 222 0 n/a 00:21:42.088 00:21:42.088 Elapsed time = 0.180 seconds 00:21:42.088 19:14:57 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:21:42.088 00:21:42.088 00:21:42.088 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.088 http://cunit.sourceforge.net/ 00:21:42.088 00:21:42.088 00:21:42.088 Suite: nvmf 00:21:42.346 Test: test_nvmf_tgt_create_poll_group ...passed 00:21:42.346 00:21:42.346 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.346 suites 1 1 n/a 0 0 00:21:42.346 tests 1 1 1 0 0 00:21:42.346 asserts 17 17 17 0 n/a 00:21:42.346 00:21:42.346 Elapsed time = 0.035 seconds 00:21:42.346 ************************************ 00:21:42.346 END TEST unittest_nvmf 00:21:42.346 ************************************ 00:21:42.346 00:21:42.346 real 0m0.653s 00:21:42.347 user 0m0.309s 00:21:42.347 sys 0m0.320s 00:21:42.347 19:14:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:42.347 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.347 19:14:58 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:42.347 19:14:58 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:42.347 19:14:58 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:21:42.347 19:14:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:42.347 19:14:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:42.347 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.347 ************************************ 00:21:42.347 START TEST unittest_nvmf_rdma 00:21:42.347 ************************************ 00:21:42.347 19:14:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:21:42.347 00:21:42.347 00:21:42.347 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.347 http://cunit.sourceforge.net/ 00:21:42.347 00:21:42.347 00:21:42.347 Suite: nvmf 00:21:42.347 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-18 19:14:58.215767] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:21:42.347 [2024-04-18 19:14:58.216231] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:21:42.347 [2024-04-18 19:14:58.216360] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:21:42.347 passed 00:21:42.347 Test: test_spdk_nvmf_rdma_request_process ...passed 00:21:42.347 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:21:42.347 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:21:42.347 Test: test_nvmf_rdma_opts_init ...passed 00:21:42.347 Test: test_nvmf_rdma_request_free_data ...passed 00:21:42.347 Test: test_nvmf_rdma_update_ibv_state ...[2024-04-18 19:14:58.218840] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:21:42.347 passed[2024-04-18 19:14:58.218933] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:21:42.347 00:21:42.347 Test: test_nvmf_rdma_resources_create ...passed 00:21:42.347 Test: test_nvmf_rdma_qpair_compare ...passed 00:21:42.347 Test: test_nvmf_rdma_resize_cq ...[2024-04-18 19:14:58.220871] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:21:42.347 Using CQ of insufficient size may lead to CQ overrun 00:21:42.347 [2024-04-18 19:14:58.221080] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:21:42.347 [2024-04-18 19:14:58.221215] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:21:42.347 passed 00:21:42.347 00:21:42.347 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.347 suites 1 1 n/a 0 0 00:21:42.347 tests 10 10 10 0 0 00:21:42.347 asserts 584 584 584 0 n/a 00:21:42.347 00:21:42.347 Elapsed time = 0.004 seconds 00:21:42.347 00:21:42.347 real 0m0.055s 00:21:42.347 user 0m0.025s 00:21:42.347 sys 0m0.028s 00:21:42.347 19:14:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:42.347 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.347 ************************************ 00:21:42.347 END TEST unittest_nvmf_rdma 00:21:42.347 ************************************ 00:21:42.605 19:14:58 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:42.605 19:14:58 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:21:42.605 19:14:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:42.606 19:14:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:42.606 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.606 ************************************ 00:21:42.606 START TEST unittest_scsi 00:21:42.606 ************************************ 00:21:42.606 19:14:58 -- common/autotest_common.sh@1111 -- # unittest_scsi 00:21:42.606 19:14:58 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:21:42.606 00:21:42.606 00:21:42.606 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.606 http://cunit.sourceforge.net/ 00:21:42.606 00:21:42.606 00:21:42.606 Suite: dev_suite 00:21:42.606 Test: dev_destruct_null_dev ...passed 00:21:42.606 Test: dev_destruct_zero_luns ...passed 00:21:42.606 Test: dev_destruct_null_lun ...passed 00:21:42.606 Test: dev_destruct_success ...passed 00:21:42.606 Test: dev_construct_num_luns_zero ...[2024-04-18 19:14:58.354997] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:21:42.606 passed 00:21:42.606 Test: dev_construct_no_lun_zero ...[2024-04-18 19:14:58.355624] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:21:42.606 passed 00:21:42.606 Test: dev_construct_null_lun ...[2024-04-18 19:14:58.355871] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:21:42.606 passed 00:21:42.606 Test: dev_construct_name_too_long ...[2024-04-18 19:14:58.356116] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:21:42.606 passed 00:21:42.606 Test: dev_construct_success ...passed 00:21:42.606 Test: dev_construct_success_lun_zero_not_first ...passed 00:21:42.606 Test: dev_queue_mgmt_task_success ...passed 00:21:42.606 Test: dev_queue_task_success ...passed 00:21:42.606 Test: dev_stop_success ...passed 00:21:42.606 Test: dev_add_port_max_ports ...[2024-04-18 19:14:58.357224] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:21:42.606 passed 00:21:42.606 Test: dev_add_port_construct_failure1 ...[2024-04-18 19:14:58.357568] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:21:42.606 passed 00:21:42.606 Test: dev_add_port_construct_failure2 ...[2024-04-18 19:14:58.357836] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:21:42.606 passed 00:21:42.606 Test: dev_add_port_success1 ...passed 00:21:42.606 Test: dev_add_port_success2 ...passed 00:21:42.606 Test: dev_add_port_success3 ...passed 00:21:42.606 Test: dev_find_port_by_id_num_ports_zero ...passed 00:21:42.606 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:21:42.606 Test: dev_find_port_by_id_success ...passed 00:21:42.606 Test: dev_add_lun_bdev_not_found ...passed 00:21:42.606 Test: dev_add_lun_no_free_lun_id ...[2024-04-18 19:14:58.359128] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:21:42.606 passed 00:21:42.606 Test: dev_add_lun_success1 ...passed 00:21:42.606 Test: dev_add_lun_success2 ...passed 00:21:42.606 Test: dev_check_pending_tasks ...passed 00:21:42.606 Test: dev_iterate_luns ...passed 00:21:42.606 Test: dev_find_free_lun ...passed 00:21:42.606 00:21:42.606 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.606 suites 1 1 n/a 0 0 00:21:42.606 tests 29 29 29 0 0 00:21:42.606 asserts 97 97 97 0 n/a 00:21:42.606 00:21:42.606 Elapsed time = 0.003 seconds 00:21:42.606 19:14:58 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:21:42.606 00:21:42.606 00:21:42.606 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.606 http://cunit.sourceforge.net/ 00:21:42.606 00:21:42.606 00:21:42.606 Suite: lun_suite 00:21:42.606 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-18 19:14:58.401966] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:21:42.606 passed 00:21:42.606 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-18 19:14:58.402631] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:21:42.606 passed 00:21:42.606 Test: lun_task_mgmt_execute_lun_reset ...passed 00:21:42.606 Test: lun_task_mgmt_execute_target_reset ...passed 00:21:42.606 Test: lun_task_mgmt_execute_invalid_case ...[2024-04-18 19:14:58.403241] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:21:42.606 passed 00:21:42.606 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:21:42.606 
Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:21:42.606 Test: lun_append_task_null_lun_not_supported ...passed 00:21:42.606 Test: lun_execute_scsi_task_pending ...passed 00:21:42.606 Test: lun_execute_scsi_task_complete ...passed 00:21:42.606 Test: lun_execute_scsi_task_resize ...passed 00:21:42.606 Test: lun_destruct_success ...passed 00:21:42.606 Test: lun_construct_null_ctx ...[2024-04-18 19:14:58.404382] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:21:42.606 passed 00:21:42.606 Test: lun_construct_success ...passed 00:21:42.606 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:21:42.606 Test: lun_reset_task_suspend_scsi_task ...passed 00:21:42.606 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:21:42.606 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:21:42.606 00:21:42.606 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.606 suites 1 1 n/a 0 0 00:21:42.606 tests 18 18 18 0 0 00:21:42.606 asserts 153 153 153 0 n/a 00:21:42.606 00:21:42.606 Elapsed time = 0.002 seconds 00:21:42.606 19:14:58 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:21:42.606 00:21:42.606 00:21:42.606 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.606 http://cunit.sourceforge.net/ 00:21:42.606 00:21:42.606 00:21:42.606 Suite: scsi_suite 00:21:42.606 Test: scsi_init ...passed 00:21:42.606 00:21:42.606 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.606 suites 1 1 n/a 0 0 00:21:42.606 tests 1 1 1 0 0 00:21:42.606 asserts 1 1 1 0 n/a 00:21:42.606 00:21:42.606 Elapsed time = 0.000 seconds 00:21:42.606 19:14:58 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:21:42.606 00:21:42.606 00:21:42.606 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.606 http://cunit.sourceforge.net/ 00:21:42.606 00:21:42.606 00:21:42.606 Suite: translation_suite 00:21:42.606 Test: mode_select_6_test ...passed 00:21:42.606 Test: mode_select_6_test2 ...passed 00:21:42.606 Test: mode_sense_6_test ...passed 00:21:42.606 Test: mode_sense_10_test ...passed 00:21:42.606 Test: inquiry_evpd_test ...passed 00:21:42.606 Test: inquiry_standard_test ...passed 00:21:42.606 Test: inquiry_overflow_test ...passed 00:21:42.606 Test: task_complete_test ...passed 00:21:42.606 Test: lba_range_test ...passed 00:21:42.607 Test: xfer_len_test ...[2024-04-18 19:14:58.492237] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:21:42.607 passed 00:21:42.607 Test: xfer_test ...passed 00:21:42.607 Test: scsi_name_padding_test ...passed 00:21:42.607 Test: get_dif_ctx_test ...passed 00:21:42.607 Test: unmap_split_test ...passed 00:21:42.607 00:21:42.607 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.607 suites 1 1 n/a 0 0 00:21:42.607 tests 14 14 14 0 0 00:21:42.607 asserts 1205 1205 1205 0 n/a 00:21:42.607 00:21:42.607 Elapsed time = 0.004 seconds 00:21:42.607 19:14:58 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:21:42.607 00:21:42.607 00:21:42.607 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.607 http://cunit.sourceforge.net/ 00:21:42.607 00:21:42.607 00:21:42.607 Suite: reservation_suite 00:21:42.607 Test: test_reservation_register ...[2024-04-18 19:14:58.528097] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 
272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:21:42.607 passed 00:21:42.607 Test: test_reservation_reserve ...[2024-04-18 19:14:58.528765] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:21:42.607 [2024-04-18 19:14:58.528954] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:21:42.607 [2024-04-18 19:14:58.529161] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:21:42.607 passed 00:21:42.607 Test: test_reservation_preempt_non_all_regs ...[2024-04-18 19:14:58.529431] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:21:42.607 [2024-04-18 19:14:58.529588] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:21:42.607 passed 00:21:42.607 Test: test_reservation_preempt_all_regs ...[2024-04-18 19:14:58.529991] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:21:42.607 passed 00:21:42.607 Test: test_reservation_cmds_conflict ...[2024-04-18 19:14:58.530386] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:21:42.607 [2024-04-18 19:14:58.530523] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:21:42.607 [2024-04-18 19:14:58.530658] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:21:42.607 [2024-04-18 19:14:58.530713] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:21:42.607 [2024-04-18 19:14:58.530849] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:21:42.607 [2024-04-18 19:14:58.530939] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:21:42.607 passed 00:21:42.607 Test: test_scsi2_reserve_release ...passed 00:21:42.607 Test: test_pr_with_scsi2_reserve_release ...[2024-04-18 19:14:58.531315] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:21:42.607 passed 00:21:42.607 00:21:42.607 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.607 suites 1 1 n/a 0 0 00:21:42.607 tests 7 7 7 0 0 00:21:42.607 asserts 257 257 257 0 n/a 00:21:42.607 00:21:42.607 Elapsed time = 0.002 seconds 00:21:42.864 00:21:42.864 real 0m0.216s 00:21:42.864 user 0m0.089s 00:21:42.864 sys 0m0.117s 00:21:42.865 19:14:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:42.865 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 ************************************ 00:21:42.865 END TEST unittest_scsi 00:21:42.865 ************************************ 00:21:42.865 19:14:58 -- unit/unittest.sh@276 -- # uname -s 00:21:42.865 19:14:58 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:21:42.865 19:14:58 -- unit/unittest.sh@277 -- # run_test unittest_sock 
unittest_sock 00:21:42.865 19:14:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:42.865 19:14:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:42.865 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 ************************************ 00:21:42.865 START TEST unittest_sock 00:21:42.865 ************************************ 00:21:42.865 19:14:58 -- common/autotest_common.sh@1111 -- # unittest_sock 00:21:42.865 19:14:58 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:21:42.865 00:21:42.865 00:21:42.865 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.865 http://cunit.sourceforge.net/ 00:21:42.865 00:21:42.865 00:21:42.865 Suite: sock 00:21:42.865 Test: posix_sock ...passed 00:21:42.865 Test: ut_sock ...passed 00:21:42.865 Test: posix_sock_group ...passed 00:21:42.865 Test: ut_sock_group ...passed 00:21:42.865 Test: posix_sock_group_fairness ...passed 00:21:42.865 Test: _posix_sock_close ...passed 00:21:42.865 Test: sock_get_default_opts ...passed 00:21:42.865 Test: ut_sock_impl_get_set_opts ...passed 00:21:42.865 Test: posix_sock_impl_get_set_opts ...passed 00:21:42.865 Test: ut_sock_map ...passed 00:21:42.865 Test: override_impl_opts ...passed 00:21:42.865 Test: ut_sock_group_get_ctx ...passed 00:21:42.865 00:21:42.865 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.865 suites 1 1 n/a 0 0 00:21:42.865 tests 12 12 12 0 0 00:21:42.865 asserts 349 349 349 0 n/a 00:21:42.865 00:21:42.865 Elapsed time = 0.007 seconds 00:21:42.865 19:14:58 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:21:42.865 00:21:42.865 00:21:42.865 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.865 http://cunit.sourceforge.net/ 00:21:42.865 00:21:42.865 00:21:42.865 Suite: posix 00:21:42.865 Test: flush ...passed 00:21:42.865 00:21:42.865 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.865 suites 1 1 n/a 0 0 00:21:42.865 tests 1 1 1 0 0 00:21:42.865 asserts 28 28 28 0 n/a 00:21:42.865 00:21:42.865 Elapsed time = 0.000 seconds 00:21:42.865 19:14:58 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:42.865 00:21:42.865 real 0m0.117s 00:21:42.865 user 0m0.032s 00:21:42.865 sys 0m0.059s 00:21:42.865 19:14:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:42.865 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 ************************************ 00:21:42.865 END TEST unittest_sock 00:21:42.865 ************************************ 00:21:43.122 19:14:58 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:21:43.122 19:14:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.122 19:14:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.122 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.122 ************************************ 00:21:43.122 START TEST unittest_thread 00:21:43.122 ************************************ 00:21:43.122 19:14:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:21:43.122 00:21:43.122 00:21:43.122 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.122 http://cunit.sourceforge.net/ 00:21:43.122 00:21:43.122 00:21:43.122 Suite: io_channel 00:21:43.122 Test: thread_alloc ...passed 00:21:43.122 Test: thread_send_msg ...passed 
00:21:43.122 Test: thread_poller ...passed 00:21:43.122 Test: poller_pause ...passed 00:21:43.122 Test: thread_for_each ...passed 00:21:43.122 Test: for_each_channel_remove ...passed 00:21:43.122 Test: for_each_channel_unreg ...[2024-04-18 19:14:58.886061] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffe074ab440 already registered (old:0x613000000200 new:0x6130000003c0) 00:21:43.122 passed 00:21:43.122 Test: thread_name ...passed 00:21:43.122 Test: channel ...[2024-04-18 19:14:58.891059] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x5567294259e0 00:21:43.122 passed 00:21:43.122 Test: channel_destroy_races ...passed 00:21:43.122 Test: thread_exit_test ...[2024-04-18 19:14:58.896929] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:21:43.122 passed 00:21:43.122 Test: thread_update_stats_test ...passed 00:21:43.122 Test: nested_channel ...passed 00:21:43.122 Test: device_unregister_and_thread_exit_race ...passed 00:21:43.122 Test: cache_closest_timed_poller ...passed 00:21:43.122 Test: multi_timed_pollers_have_same_expiration ...passed 00:21:43.122 Test: io_device_lookup ...passed 00:21:43.122 Test: spdk_spin ...[2024-04-18 19:14:58.910198] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:21:43.122 [2024-04-18 19:14:58.910377] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe074ab430 00:21:43.122 [2024-04-18 19:14:58.910558] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:21:43.122 [2024-04-18 19:14:58.912365] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:21:43.123 [2024-04-18 19:14:58.912542] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe074ab430 00:21:43.123 [2024-04-18 19:14:58.912698] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:21:43.123 [2024-04-18 19:14:58.912837] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe074ab430 00:21:43.123 [2024-04-18 19:14:58.912896] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:21:43.123 [2024-04-18 19:14:58.913088] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe074ab430 00:21:43.123 [2024-04-18 19:14:58.913192] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:21:43.123 [2024-04-18 19:14:58.913326] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe074ab430 00:21:43.123 passed 00:21:43.123 Test: for_each_channel_and_thread_exit_race ...passed 00:21:43.123 Test: for_each_thread_and_thread_exit_race ...passed 00:21:43.123 00:21:43.123 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.123 
suites 1 1 n/a 0 0 00:21:43.123 tests 20 20 20 0 0 00:21:43.123 asserts 409 409 409 0 n/a 00:21:43.123 00:21:43.123 Elapsed time = 0.053 seconds 00:21:43.123 00:21:43.123 real 0m0.104s 00:21:43.123 user 0m0.050s 00:21:43.123 sys 0m0.050s 00:21:43.123 19:14:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.123 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.123 ************************************ 00:21:43.123 END TEST unittest_thread 00:21:43.123 ************************************ 00:21:43.123 19:14:58 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:21:43.123 19:14:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.123 19:14:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.123 19:14:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.123 ************************************ 00:21:43.123 START TEST unittest_iobuf 00:21:43.123 ************************************ 00:21:43.123 19:14:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:21:43.123 00:21:43.123 00:21:43.123 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.123 http://cunit.sourceforge.net/ 00:21:43.123 00:21:43.123 00:21:43.123 Suite: io_channel 00:21:43.391 Test: iobuf ...passed 00:21:43.391 Test: iobuf_cache ...[2024-04-18 19:14:59.057740] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:21:43.391 [2024-04-18 19:14:59.058222] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:21:43.391 [2024-04-18 19:14:59.058460] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 323:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:21:43.391 [2024-04-18 19:14:59.058620] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 326:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:21:43.391 [2024-04-18 19:14:59.058799] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:21:43.391 [2024-04-18 19:14:59.058932] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:21:43.391 passed 00:21:43.391 00:21:43.391 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.391 suites 1 1 n/a 0 0 00:21:43.391 tests 2 2 2 0 0 00:21:43.391 asserts 107 107 107 0 n/a 00:21:43.391 00:21:43.391 Elapsed time = 0.007 seconds 00:21:43.391 00:21:43.391 real 0m0.049s 00:21:43.391 user 0m0.040s 00:21:43.391 sys 0m0.008s 00:21:43.391 19:14:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.391 19:14:59 -- common/autotest_common.sh@10 -- # set +x 00:21:43.391 ************************************ 00:21:43.391 END TEST unittest_iobuf 00:21:43.391 ************************************ 00:21:43.391 19:14:59 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:21:43.391 19:14:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.391 19:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.391 19:14:59 -- common/autotest_common.sh@10 -- # set +x 00:21:43.391 ************************************ 00:21:43.391 START TEST unittest_util 00:21:43.391 ************************************ 00:21:43.391 19:14:59 -- common/autotest_common.sh@1111 -- # unittest_util 00:21:43.391 19:14:59 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:21:43.391 00:21:43.391 00:21:43.391 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.391 http://cunit.sourceforge.net/ 00:21:43.391 00:21:43.391 00:21:43.391 Suite: base64 00:21:43.391 Test: test_base64_get_encoded_strlen ...passed 00:21:43.391 Test: test_base64_get_decoded_len ...passed 00:21:43.391 Test: test_base64_encode ...passed 00:21:43.391 Test: test_base64_decode ...passed 00:21:43.391 Test: test_base64_urlsafe_encode ...passed 00:21:43.391 Test: test_base64_urlsafe_decode ...passed 00:21:43.391 00:21:43.391 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.391 suites 1 1 n/a 0 0 00:21:43.391 tests 6 6 6 0 0 00:21:43.391 asserts 112 112 112 0 n/a 00:21:43.391 00:21:43.391 Elapsed time = 0.000 seconds 00:21:43.391 19:14:59 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:21:43.391 00:21:43.391 00:21:43.391 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.391 http://cunit.sourceforge.net/ 00:21:43.391 00:21:43.391 00:21:43.391 Suite: bit_array 00:21:43.391 Test: test_1bit ...passed 00:21:43.391 Test: test_64bit ...passed 00:21:43.391 Test: test_find ...passed 00:21:43.391 Test: test_resize ...passed 00:21:43.391 Test: test_errors ...passed 00:21:43.391 Test: test_count ...passed 00:21:43.391 Test: test_mask_store_load ...passed 00:21:43.391 Test: test_mask_clear ...passed 00:21:43.391 00:21:43.391 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.391 suites 1 1 n/a 0 0 00:21:43.391 tests 8 8 8 0 0 00:21:43.391 asserts 5075 5075 5075 0 n/a 00:21:43.391 00:21:43.391 Elapsed time = 0.003 seconds 00:21:43.391 19:14:59 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:21:43.391 00:21:43.391 00:21:43.391 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.391 http://cunit.sourceforge.net/ 00:21:43.391 00:21:43.391 00:21:43.391 Suite: cpuset 00:21:43.391 Test: test_cpuset ...passed 00:21:43.391 Test: test_cpuset_parse ...[2024-04-18 19:14:59.259241] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:21:43.391 [2024-04-18 19:14:59.259752] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:21:43.391 [2024-04-18 19:14:59.259992] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:21:43.391 [2024-04-18 19:14:59.260225] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:21:43.391 [2024-04-18 19:14:59.260394] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:21:43.391 [2024-04-18 19:14:59.260540] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:21:43.391 [2024-04-18 19:14:59.260672] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:21:43.391 [2024-04-18 19:14:59.260819] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:21:43.391 passed 00:21:43.391 Test: test_cpuset_fmt ...passed 00:21:43.391 00:21:43.391 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.391 suites 1 1 n/a 0 0 00:21:43.391 tests 3 3 3 0 0 00:21:43.391 asserts 65 65 65 0 n/a 00:21:43.391 00:21:43.391 Elapsed time = 0.003 seconds 00:21:43.391 19:14:59 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:21:43.391 00:21:43.391 00:21:43.391 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.391 http://cunit.sourceforge.net/ 00:21:43.391 00:21:43.391 00:21:43.391 Suite: crc16 00:21:43.391 Test: test_crc16_t10dif ...passed 00:21:43.391 Test: test_crc16_t10dif_seed ...passed 00:21:43.391 Test: test_crc16_t10dif_copy ...passed 00:21:43.391 00:21:43.391 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.391 suites 1 1 n/a 0 0 00:21:43.391 tests 3 3 3 0 0 00:21:43.391 asserts 5 5 5 0 n/a 00:21:43.391 00:21:43.391 Elapsed time = 0.000 seconds 00:21:43.391 19:14:59 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:21:43.649 00:21:43.649 00:21:43.649 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.649 http://cunit.sourceforge.net/ 00:21:43.649 00:21:43.649 00:21:43.649 Suite: crc32_ieee 00:21:43.649 Test: test_crc32_ieee ...passed 00:21:43.649 00:21:43.649 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.649 suites 1 1 n/a 0 0 00:21:43.649 tests 1 1 1 0 0 00:21:43.649 asserts 1 1 1 0 n/a 00:21:43.649 00:21:43.649 Elapsed time = 0.000 seconds 00:21:43.649 19:14:59 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:21:43.649 00:21:43.649 00:21:43.649 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.649 http://cunit.sourceforge.net/ 00:21:43.649 00:21:43.649 00:21:43.649 Suite: crc32c 00:21:43.649 Test: test_crc32c ...passed 00:21:43.649 Test: test_crc32c_nvme ...passed 00:21:43.649 00:21:43.649 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.649 suites 1 1 n/a 0 0 00:21:43.649 tests 2 2 2 0 0 00:21:43.649 asserts 16 16 16 0 n/a 00:21:43.649 00:21:43.649 Elapsed time = 0.000 seconds 00:21:43.649 19:14:59 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:21:43.649 00:21:43.649 00:21:43.649 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.649 http://cunit.sourceforge.net/ 00:21:43.649 00:21:43.649 00:21:43.649 Suite: crc64 00:21:43.649 Test: test_crc64_nvme 
...passed 00:21:43.649 00:21:43.649 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.649 suites 1 1 n/a 0 0 00:21:43.649 tests 1 1 1 0 0 00:21:43.649 asserts 4 4 4 0 n/a 00:21:43.649 00:21:43.649 Elapsed time = 0.000 seconds 00:21:43.649 19:14:59 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:21:43.649 00:21:43.649 00:21:43.649 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.649 http://cunit.sourceforge.net/ 00:21:43.649 00:21:43.649 00:21:43.649 Suite: string 00:21:43.649 Test: test_parse_ip_addr ...passed 00:21:43.649 Test: test_str_chomp ...passed 00:21:43.649 Test: test_parse_capacity ...passed 00:21:43.649 Test: test_sprintf_append_realloc ...passed 00:21:43.649 Test: test_strtol ...passed 00:21:43.649 Test: test_strtoll ...passed 00:21:43.649 Test: test_strarray ...passed 00:21:43.649 Test: test_strcpy_replace ...passed 00:21:43.649 00:21:43.649 Run Summary: Type Total Ran Passed Failed Inactive 00:21:43.649 suites 1 1 n/a 0 0 00:21:43.649 tests 8 8 8 0 0 00:21:43.649 asserts 161 161 161 0 n/a 00:21:43.649 00:21:43.649 Elapsed time = 0.001 seconds 00:21:43.649 19:14:59 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:21:43.649 00:21:43.649 00:21:43.649 CUnit - A unit testing framework for C - Version 2.1-3 00:21:43.649 http://cunit.sourceforge.net/ 00:21:43.649 00:21:43.649 00:21:43.649 Suite: dif 00:21:43.649 Test: dif_generate_and_verify_test ...[2024-04-18 19:14:59.472766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:21:43.649 [2024-04-18 19:14:59.473948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:21:43.649 [2024-04-18 19:14:59.474691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:21:43.649 [2024-04-18 19:14:59.475527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:21:43.649 [2024-04-18 19:14:59.476383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:21:43.649 [2024-04-18 19:14:59.477216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:21:43.649 passed 00:21:43.649 Test: dif_disable_check_test ...[2024-04-18 19:14:59.480009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:21:43.649 [2024-04-18 19:14:59.480823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:21:43.649 [2024-04-18 19:14:59.481608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:21:43.649 passed 00:21:43.649 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-18 19:14:59.484677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:21:43.649 [2024-04-18 19:14:59.485489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:21:43.649 
[2024-04-18 19:14:59.486265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:21:43.649 [2024-04-18 19:14:59.487124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:21:43.649 [2024-04-18 19:14:59.487997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:21:43.649 [2024-04-18 19:14:59.488815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:21:43.649 [2024-04-18 19:14:59.489590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:21:43.649 [2024-04-18 19:14:59.490359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:21:43.649 [2024-04-18 19:14:59.491164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:21:43.649 [2024-04-18 19:14:59.492051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:21:43.649 [2024-04-18 19:14:59.492884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:21:43.649 passed 00:21:43.649 Test: dif_apptag_mask_test ...[2024-04-18 19:14:59.493770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:21:43.649 [2024-04-18 19:14:59.494511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:21:43.649 passed 00:21:43.649 Test: dif_sec_512_md_0_error_test ...[2024-04-18 19:14:59.495190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:21:43.649 passed 00:21:43.649 Test: dif_sec_4096_md_0_error_test ...[2024-04-18 19:14:59.495528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:21:43.649 [2024-04-18 19:14:59.495602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:21:43.649 passed 00:21:43.649 Test: dif_sec_4100_md_128_error_test ...[2024-04-18 19:14:59.495806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:21:43.649 passed[2024-04-18 19:14:59.495911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:21:43.649 00:21:43.649 Test: dif_guard_seed_test ...passed 00:21:43.649 Test: dif_guard_value_test ...passed 00:21:43.649 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:21:43.649 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:21:43.650 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:21:43.650 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:21:43.650 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:21:43.910 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:21:43.910 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:21:43.910 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:21:43.910 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:21:43.910 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:21:43.910 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:21:43.910 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-18 19:14:59.614036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4d, Actual=fd4c 00:21:43.910 [2024-04-18 19:14:59.620779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe20, Actual=fe21 00:21:43.910 [2024-04-18 19:14:59.624807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.910 [2024-04-18 19:14:59.626605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.910 [2024-04-18 19:14:59.628599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1005b 00:21:43.910 [2024-04-18 19:14:59.630493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1005b 00:21:43.910 [2024-04-18 19:14:59.632377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=7693 00:21:43.910 [2024-04-18 19:14:59.633523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe21, Actual=5974 00:21:43.910 [2024-04-18 19:14:59.634680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab653ed, Actual=1ab753ed 00:21:43.910 [2024-04-18 19:14:59.636504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38564660, Actual=38574660 00:21:43.910 [2024-04-18 19:14:59.638182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.910 [2024-04-18 19:14:59.639907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.910 [2024-04-18 19:14:59.641815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=100000000005b 00:21:43.910 [2024-04-18 19:14:59.643745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=100000000005b 00:21:43.910 [2024-04-18 19:14:59.645628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=a9339e61 00:21:43.910 [2024-04-18 19:14:59.646767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574660, Actual=1368f647 00:21:43.910 [2024-04-18 19:14:59.647956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.910 [2024-04-18 19:14:59.649810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:21:43.910 [2024-04-18 19:14:59.651669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.910 [2024-04-18 19:14:59.653449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.910 [2024-04-18 19:14:59.655175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5a 00:21:43.910 [2024-04-18 19:14:59.657010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5a 00:21:43.911 [2024-04-18 19:14:59.658919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.911 [2024-04-18 19:14:59.660156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837a266, Actual=68b8eb55e6c7ecb4 00:21:43.911 passed 00:21:43.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-18 19:14:59.660832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:21:43.911 [2024-04-18 19:14:59.661127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:21:43.911 [2024-04-18 19:14:59.661408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.661679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.662015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.911 [2024-04-18 19:14:59.662322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.911 [2024-04-18 19:14:59.662607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7693 00:21:43.911 [2024-04-18 19:14:59.662777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5974 00:21:43.911 [2024-04-18 19:14:59.662965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:21:43.911 [2024-04-18 19:14:59.663251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:21:43.911 [2024-04-18 19:14:59.663570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.663860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.664118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.911 [2024-04-18 19:14:59.664367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.911 [2024-04-18 19:14:59.664632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a9339e61 00:21:43.911 [2024-04-18 19:14:59.664791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1368f647 00:21:43.911 [2024-04-18 19:14:59.664965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.911 [2024-04-18 19:14:59.665234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:21:43.911 [2024-04-18 19:14:59.665491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.665738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.665993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.911 [2024-04-18 19:14:59.666239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.911 [2024-04-18 19:14:59.666520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.911 [2024-04-18 
19:14:59.666687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=68b8eb55e6c7ecb4 00:21:43.911 passed 00:21:43.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-18 19:14:59.667019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:21:43.911 [2024-04-18 19:14:59.667284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:21:43.911 [2024-04-18 19:14:59.667563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.667835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.668103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.911 [2024-04-18 19:14:59.668371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.911 [2024-04-18 19:14:59.668623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7693 00:21:43.911 [2024-04-18 19:14:59.668792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5974 00:21:43.911 [2024-04-18 19:14:59.668973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:21:43.911 [2024-04-18 19:14:59.669229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:21:43.911 [2024-04-18 19:14:59.669486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.669762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.670066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.911 [2024-04-18 19:14:59.670344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.911 [2024-04-18 19:14:59.670626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a9339e61 00:21:43.911 [2024-04-18 19:14:59.670810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1368f647 00:21:43.911 [2024-04-18 19:14:59.671018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.911 [2024-04-18 19:14:59.671294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:21:43.911 [2024-04-18 19:14:59.671587] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.671871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.672145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.911 [2024-04-18 19:14:59.672442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.911 [2024-04-18 19:14:59.672748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.911 [2024-04-18 19:14:59.672963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=68b8eb55e6c7ecb4 00:21:43.911 passed 00:21:43.911 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-18 19:14:59.673237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:21:43.911 [2024-04-18 19:14:59.673580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:21:43.911 [2024-04-18 19:14:59.673873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.674147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.674449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.911 [2024-04-18 19:14:59.674729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.911 [2024-04-18 19:14:59.675017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7693 00:21:43.911 [2024-04-18 19:14:59.675207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5974 00:21:43.911 [2024-04-18 19:14:59.675396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:21:43.911 [2024-04-18 19:14:59.675707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:21:43.911 [2024-04-18 19:14:59.676020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.676307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.911 [2024-04-18 19:14:59.676589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.911 [2024-04-18 19:14:59.676872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to 
compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.911 [2024-04-18 19:14:59.677153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a9339e61 00:21:43.911 [2024-04-18 19:14:59.677316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1368f647 00:21:43.911 [2024-04-18 19:14:59.677481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.911 [2024-04-18 19:14:59.677733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:21:43.912 [2024-04-18 19:14:59.677990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.678248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.678510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.912 [2024-04-18 19:14:59.678765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.912 [2024-04-18 19:14:59.679042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.912 [2024-04-18 19:14:59.679204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=68b8eb55e6c7ecb4 00:21:43.912 passed 00:21:43.912 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-18 19:14:59.679540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:21:43.912 [2024-04-18 19:14:59.679833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:21:43.912 [2024-04-18 19:14:59.680118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.680390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.680680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.912 [2024-04-18 19:14:59.680953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.912 [2024-04-18 19:14:59.681216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7693 00:21:43.912 [2024-04-18 19:14:59.681374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5974 00:21:43.912 passed 00:21:43.912 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-18 19:14:59.681698] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:21:43.912 [2024-04-18 19:14:59.681970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:21:43.912 [2024-04-18 19:14:59.682265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.682519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.682773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.912 [2024-04-18 19:14:59.683033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.912 [2024-04-18 19:14:59.683290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a9339e61 00:21:43.912 [2024-04-18 19:14:59.683473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1368f647 00:21:43.912 [2024-04-18 19:14:59.683689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.912 [2024-04-18 19:14:59.683947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:21:43.912 [2024-04-18 19:14:59.684194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.684458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.684707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.912 [2024-04-18 19:14:59.684959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.912 [2024-04-18 19:14:59.685239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.912 [2024-04-18 19:14:59.685396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=68b8eb55e6c7ecb4 00:21:43.912 passed 00:21:43.912 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-18 19:14:59.685684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:21:43.912 [2024-04-18 19:14:59.685944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:21:43.912 [2024-04-18 19:14:59.686196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.686458] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.686734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.912 [2024-04-18 19:14:59.686996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:21:43.912 [2024-04-18 19:14:59.687247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7693 00:21:43.912 [2024-04-18 19:14:59.687424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5974 00:21:43.912 passed 00:21:43.912 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-18 19:14:59.687739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:21:43.912 [2024-04-18 19:14:59.688006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:21:43.912 [2024-04-18 19:14:59.688286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.688545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.688799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.912 [2024-04-18 19:14:59.689055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:21:43.912 [2024-04-18 19:14:59.689306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a9339e61 00:21:43.912 [2024-04-18 19:14:59.689455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1368f647 00:21:43.912 [2024-04-18 19:14:59.689644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.912 [2024-04-18 19:14:59.689877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:21:43.912 [2024-04-18 19:14:59.690109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.690332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.690570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.912 [2024-04-18 19:14:59.690793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:21:43.912 [2024-04-18 19:14:59.691047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.912 [2024-04-18 19:14:59.691198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=68b8eb55e6c7ecb4 00:21:43.912 passed 00:21:43.912 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:21:43.912 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:21:43.912 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:21:43.912 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:21:43.912 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:21:43.912 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:21:43.912 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:21:43.912 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:21:43.912 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:21:43.912 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-18 19:14:59.722697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4d, Actual=fd4c 00:21:43.912 [2024-04-18 19:14:59.723671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=db90, Actual=db91 00:21:43.912 [2024-04-18 19:14:59.724573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.725451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.912 [2024-04-18 19:14:59.726274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1005b 00:21:43.912 [2024-04-18 19:14:59.727087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1005b 00:21:43.912 [2024-04-18 19:14:59.727923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=7693 00:21:43.912 [2024-04-18 19:14:59.728745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=c18e 00:21:43.912 [2024-04-18 19:14:59.729554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab653ed, Actual=1ab753ed 00:21:43.913 [2024-04-18 19:14:59.730388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=39a3c5a5, Actual=39a2c5a5 00:21:43.913 [2024-04-18 19:14:59.731205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.732178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.733094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=100000000005b 00:21:43.913 [2024-04-18 19:14:59.734022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: 
LBA=91, Expected=5b, Actual=100000000005b 00:21:43.913 [2024-04-18 19:14:59.734849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=a9339e61 00:21:43.913 [2024-04-18 19:14:59.735696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=52b4c1c8 00:21:43.913 [2024-04-18 19:14:59.736587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.913 [2024-04-18 19:14:59.737462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=ffbc6304058234a4, Actual=ffbd6304058234a4 00:21:43.913 [2024-04-18 19:14:59.738285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.739112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.739949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5a 00:21:43.913 [2024-04-18 19:14:59.740778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5a 00:21:43.913 [2024-04-18 19:14:59.741591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.913 [2024-04-18 19:14:59.742541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=480c95f3d8887027 00:21:43.913 passed 00:21:43.913 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-18 19:14:59.743041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4d, Actual=fd4c 00:21:43.913 [2024-04-18 19:14:59.743319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2f8b, Actual=2f8a 00:21:43.913 [2024-04-18 19:14:59.743593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.743876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.744178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10059 00:21:43.913 [2024-04-18 19:14:59.744450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10059 00:21:43.913 [2024-04-18 19:14:59.744695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7693 00:21:43.913 [2024-04-18 19:14:59.744951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3595 00:21:43.913 [2024-04-18 19:14:59.745186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab653ed, Actual=1ab753ed 
00:21:43.913 [2024-04-18 19:14:59.745413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=db95e427, Actual=db94e427 00:21:43.913 [2024-04-18 19:14:59.745638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.745895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.746135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000000059 00:21:43.913 [2024-04-18 19:14:59.746371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000000059 00:21:43.913 [2024-04-18 19:14:59.746612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a9339e61 00:21:43.913 [2024-04-18 19:14:59.746844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=b082e04a 00:21:43.913 [2024-04-18 19:14:59.747098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.913 [2024-04-18 19:14:59.747329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1f21f7e43abe3f3e, Actual=1f20f7e43abe3f3e 00:21:43.913 [2024-04-18 19:14:59.747588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.747883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.748135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=58 00:21:43.913 [2024-04-18 19:14:59.748378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=58 00:21:43.913 [2024-04-18 19:14:59.748675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.913 [2024-04-18 19:14:59.748946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=a8910113e7b47bbd 00:21:43.913 passed 00:21:43.913 Test: dix_sec_512_md_0_error ...[2024-04-18 19:14:59.749151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:21:43.913 passed 00:21:43.913 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:21:43.913 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:21:43.913 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:21:43.913 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:21:43.913 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:21:43.913 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:21:43.913 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:21:43.913 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:21:43.913 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:21:43.913 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-18 19:14:59.781715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4d, Actual=fd4c 00:21:43.913 [2024-04-18 19:14:59.782659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=db90, Actual=db91 00:21:43.913 [2024-04-18 19:14:59.783561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.784461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.785367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1005b 00:21:43.913 [2024-04-18 19:14:59.786243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1005b 00:21:43.913 [2024-04-18 19:14:59.787101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=7693 00:21:43.913 [2024-04-18 19:14:59.788017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=c18e 00:21:43.913 [2024-04-18 19:14:59.788918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab653ed, Actual=1ab753ed 00:21:43.913 [2024-04-18 19:14:59.789787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=39a3c5a5, Actual=39a2c5a5 00:21:43.913 [2024-04-18 19:14:59.790778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.791776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.792738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=100000000005b 00:21:43.913 [2024-04-18 19:14:59.793702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=100000000005b 00:21:43.913 [2024-04-18 19:14:59.794676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=a9339e61 00:21:43.913 [2024-04-18 19:14:59.795605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=91, Expected=798b71ef, Actual=52b4c1c8 00:21:43.913 [2024-04-18 19:14:59.796600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.913 [2024-04-18 19:14:59.797514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=ffbc6304058234a4, Actual=ffbd6304058234a4 00:21:43.913 [2024-04-18 19:14:59.798402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.799284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=89 00:21:43.913 [2024-04-18 19:14:59.800213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5a 00:21:43.913 [2024-04-18 19:14:59.801115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5a 00:21:43.913 [2024-04-18 19:14:59.802030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.913 [2024-04-18 19:14:59.802920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=480c95f3d8887027 00:21:43.913 passed 00:21:43.914 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-18 19:14:59.803415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4d, Actual=fd4c 00:21:43.914 [2024-04-18 19:14:59.803706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2f8b, Actual=2f8a 00:21:43.914 [2024-04-18 19:14:59.803979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.914 [2024-04-18 19:14:59.804250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.914 [2024-04-18 19:14:59.804536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10059 00:21:43.914 [2024-04-18 19:14:59.804808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10059 00:21:43.914 [2024-04-18 19:14:59.805071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7693 00:21:43.914 [2024-04-18 19:14:59.805337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3595 00:21:43.914 [2024-04-18 19:14:59.805596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab653ed, Actual=1ab753ed 00:21:43.914 [2024-04-18 19:14:59.805857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=db95e427, Actual=db94e427 00:21:43.914 [2024-04-18 19:14:59.806126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 
00:21:43.914 [2024-04-18 19:14:59.806425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.914 [2024-04-18 19:14:59.806682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000000059 00:21:43.914 [2024-04-18 19:14:59.806939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000000059 00:21:43.914 [2024-04-18 19:14:59.807205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a9339e61 00:21:43.914 [2024-04-18 19:14:59.807480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=b082e04a 00:21:43.914 [2024-04-18 19:14:59.807770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:21:43.914 [2024-04-18 19:14:59.808037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1f21f7e43abe3f3e, Actual=1f20f7e43abe3f3e 00:21:43.914 [2024-04-18 19:14:59.808288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.914 [2024-04-18 19:14:59.808559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=89 00:21:43.914 [2024-04-18 19:14:59.808818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=58 00:21:43.914 [2024-04-18 19:14:59.809194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=58 00:21:43.914 [2024-04-18 19:14:59.809591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=e0c3fbeeb2c070a6 00:21:43.914 [2024-04-18 19:14:59.809923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=a8910113e7b47bbd 00:21:43.914 passed 00:21:43.914 Test: set_md_interleave_iovs_test ...passed 00:21:43.914 Test: set_md_interleave_iovs_split_test ...passed 00:21:43.914 Test: dif_generate_stream_pi_16_test ...passed 00:21:43.914 Test: dif_generate_stream_test ...passed 00:21:43.914 Test: set_md_interleave_iovs_alignment_test ...[2024-04-18 19:14:59.816535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:21:43.914 passed 00:21:43.914 Test: dif_generate_split_test ...passed 00:21:43.914 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:21:43.914 Test: dif_verify_split_test ...passed 00:21:43.914 Test: dif_verify_stream_multi_segments_test ...passed 00:21:43.914 Test: update_crc32c_pi_16_test ...passed 00:21:43.914 Test: update_crc32c_test ...passed 00:21:43.914 Test: dif_update_crc32c_split_test ...passed 00:21:43.914 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:21:43.914 Test: get_range_with_md_test ...passed 00:21:43.914 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:21:43.914 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:21:44.172 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:21:44.172 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:21:44.172 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:21:44.172 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:21:44.172 Test: dif_generate_and_verify_unmap_test ...passed 00:21:44.172 00:21:44.172 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.172 suites 1 1 n/a 0 0 00:21:44.172 tests 79 79 79 0 0 00:21:44.172 asserts 3584 3584 3584 0 n/a 00:21:44.172 00:21:44.172 Elapsed time = 0.358 seconds 00:21:44.172 19:14:59 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:21:44.172 00:21:44.172 00:21:44.172 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.172 http://cunit.sourceforge.net/ 00:21:44.172 00:21:44.172 00:21:44.172 Suite: iov 00:21:44.172 Test: test_single_iov ...passed 00:21:44.172 Test: test_simple_iov ...passed 00:21:44.172 Test: test_complex_iov ...passed 00:21:44.172 Test: test_iovs_to_buf ...passed 00:21:44.172 Test: test_buf_to_iovs ...passed 00:21:44.172 Test: test_memset ...passed 00:21:44.172 Test: test_iov_one ...passed 00:21:44.172 Test: test_iov_xfer ...passed 00:21:44.172 00:21:44.172 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.172 suites 1 1 n/a 0 0 00:21:44.172 tests 8 8 8 0 0 00:21:44.172 asserts 156 156 156 0 n/a 00:21:44.172 00:21:44.172 Elapsed time = 0.000 seconds 00:21:44.172 19:14:59 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:21:44.172 00:21:44.172 00:21:44.172 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.172 http://cunit.sourceforge.net/ 00:21:44.172 00:21:44.172 00:21:44.172 Suite: math 00:21:44.172 Test: test_serial_number_arithmetic ...passed 00:21:44.172 Suite: erase 00:21:44.172 Test: test_memset_s ...passed 00:21:44.172 00:21:44.172 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.172 suites 2 2 n/a 0 0 00:21:44.172 tests 2 2 2 0 0 00:21:44.172 asserts 18 18 18 0 n/a 00:21:44.172 00:21:44.172 Elapsed time = 0.000 seconds 00:21:44.172 19:14:59 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:21:44.172 00:21:44.173 00:21:44.173 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.173 http://cunit.sourceforge.net/ 00:21:44.173 00:21:44.173 00:21:44.173 Suite: pipe 00:21:44.173 Test: test_create_destroy ...passed 00:21:44.173 Test: test_write_get_buffer ...passed 00:21:44.173 Test: test_write_advance ...passed 00:21:44.173 Test: test_read_get_buffer ...passed 00:21:44.173 Test: test_read_advance ...passed 00:21:44.173 Test: test_data ...passed 00:21:44.173 00:21:44.173 Run Summary: Type Total Ran 
Passed Failed Inactive 00:21:44.173 suites 1 1 n/a 0 0 00:21:44.173 tests 6 6 6 0 0 00:21:44.173 asserts 251 251 251 0 n/a 00:21:44.173 00:21:44.173 Elapsed time = 0.000 seconds 00:21:44.173 19:14:59 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:21:44.173 00:21:44.173 00:21:44.173 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.173 http://cunit.sourceforge.net/ 00:21:44.173 00:21:44.173 00:21:44.173 Suite: xor 00:21:44.173 Test: test_xor_gen ...passed 00:21:44.173 00:21:44.173 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.173 suites 1 1 n/a 0 0 00:21:44.173 tests 1 1 1 0 0 00:21:44.173 asserts 17 17 17 0 n/a 00:21:44.173 00:21:44.173 Elapsed time = 0.007 seconds 00:21:44.173 ************************************ 00:21:44.173 END TEST unittest_util 00:21:44.173 ************************************ 00:21:44.173 00:21:44.173 real 0m0.876s 00:21:44.173 user 0m0.561s 00:21:44.173 sys 0m0.279s 00:21:44.173 19:15:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.173 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.173 19:15:00 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:21:44.173 19:15:00 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:21:44.173 19:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:44.173 19:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.173 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.431 ************************************ 00:21:44.431 START TEST unittest_vhost 00:21:44.431 ************************************ 00:21:44.431 19:15:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:21:44.431 00:21:44.431 00:21:44.431 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.431 http://cunit.sourceforge.net/ 00:21:44.431 00:21:44.431 00:21:44.431 Suite: vhost_suite 00:21:44.431 Test: desc_to_iov_test ...[2024-04-18 19:15:00.150943] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:21:44.431 passed 00:21:44.431 Test: create_controller_test ...[2024-04-18 19:15:00.154597] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:21:44.431 [2024-04-18 19:15:00.154798] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:21:44.431 [2024-04-18 19:15:00.154983] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:21:44.431 [2024-04-18 19:15:00.155130] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:21:44.431 [2024-04-18 19:15:00.155197] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:21:44.431 [2024-04-18 19:15:00.155338] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1782:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-04-18 19:15:00.156896] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:21:44.431 passed 00:21:44.431 Test: session_find_by_vid_test ...passed 00:21:44.431 Test: remove_controller_test ...[2024-04-18 19:15:00.159477] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1867:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:21:44.431 passed 00:21:44.431 Test: vq_avail_ring_get_test ...passed 00:21:44.431 Test: vq_packed_ring_test ...passed 00:21:44.431 Test: vhost_blk_construct_test ...passed 00:21:44.431 00:21:44.431 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.431 suites 1 1 n/a 0 0 00:21:44.431 tests 7 7 7 0 0 00:21:44.431 asserts 147 147 147 0 n/a 00:21:44.431 00:21:44.431 Elapsed time = 0.010 seconds 00:21:44.431 00:21:44.431 real 0m0.058s 00:21:44.431 user 0m0.042s 00:21:44.431 sys 0m0.013s 00:21:44.431 19:15:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.431 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.431 ************************************ 00:21:44.431 END TEST unittest_vhost 00:21:44.431 ************************************ 00:21:44.431 19:15:00 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:21:44.431 19:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:44.431 19:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.431 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.431 ************************************ 00:21:44.431 START TEST unittest_dma 00:21:44.431 ************************************ 00:21:44.431 19:15:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:21:44.431 00:21:44.431 00:21:44.431 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.431 http://cunit.sourceforge.net/ 00:21:44.431 00:21:44.431 00:21:44.431 Suite: dma_suite 00:21:44.431 Test: test_dma ...[2024-04-18 19:15:00.292572] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:21:44.431 passed 00:21:44.431 00:21:44.431 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.431 suites 1 1 n/a 0 0 00:21:44.431 tests 1 1 1 0 0 00:21:44.432 asserts 54 54 54 0 n/a 00:21:44.432 00:21:44.432 Elapsed time = 0.001 seconds 00:21:44.432 00:21:44.432 real 0m0.034s 00:21:44.432 user 0m0.016s 00:21:44.432 sys 0m0.018s 00:21:44.432 19:15:00 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.432 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.432 ************************************ 00:21:44.432 END TEST unittest_dma 00:21:44.432 ************************************ 00:21:44.432 19:15:00 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:21:44.432 19:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:44.432 19:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.432 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.691 ************************************ 00:21:44.691 START TEST unittest_init 00:21:44.691 ************************************ 00:21:44.691 19:15:00 -- common/autotest_common.sh@1111 -- # unittest_init 00:21:44.691 19:15:00 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:21:44.691 00:21:44.691 00:21:44.691 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.691 http://cunit.sourceforge.net/ 00:21:44.691 00:21:44.691 00:21:44.691 Suite: subsystem_suite 00:21:44.691 Test: subsystem_sort_test_depends_on_single ...passed 00:21:44.691 Test: subsystem_sort_test_depends_on_multiple ...passed 00:21:44.691 Test: subsystem_sort_test_missing_dependency ...[2024-04-18 19:15:00.420084] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:21:44.691 [2024-04-18 19:15:00.420544] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:21:44.691 passed 00:21:44.691 00:21:44.691 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.691 suites 1 1 n/a 0 0 00:21:44.691 tests 3 3 3 0 0 00:21:44.691 asserts 20 20 20 0 n/a 00:21:44.691 00:21:44.691 Elapsed time = 0.001 seconds 00:21:44.691 00:21:44.691 real 0m0.042s 00:21:44.691 user 0m0.022s 00:21:44.691 sys 0m0.019s 00:21:44.691 19:15:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.691 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.691 ************************************ 00:21:44.691 END TEST unittest_init 00:21:44.691 ************************************ 00:21:44.691 19:15:00 -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:21:44.691 19:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:44.691 19:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.691 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.691 ************************************ 00:21:44.691 START TEST unittest_keyring 00:21:44.691 ************************************ 00:21:44.691 19:15:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:21:44.691 00:21:44.691 00:21:44.691 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.691 http://cunit.sourceforge.net/ 00:21:44.691 00:21:44.691 00:21:44.691 Suite: keyring 00:21:44.691 Test: test_keyring_add_remove ...[2024-04-18 19:15:00.553176] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:21:44.691 [2024-04-18 19:15:00.553593] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:21:44.691 [2024-04-18 19:15:00.553715] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the 
keyring 00:21:44.691 passed 00:21:44.691 Test: test_keyring_get_put ...passed 00:21:44.691 00:21:44.691 Run Summary: Type Total Ran Passed Failed Inactive 00:21:44.691 suites 1 1 n/a 0 0 00:21:44.691 tests 2 2 2 0 0 00:21:44.691 asserts 44 44 44 0 n/a 00:21:44.691 00:21:44.691 Elapsed time = 0.001 seconds 00:21:44.691 00:21:44.691 real 0m0.037s 00:21:44.691 user 0m0.020s 00:21:44.691 sys 0m0.016s 00:21:44.691 19:15:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.691 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:21:44.691 ************************************ 00:21:44.691 END TEST unittest_keyring 00:21:44.691 ************************************ 00:21:44.691 19:15:00 -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:21:44.691 19:15:00 -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:44.691 19:15:00 -- unit/unittest.sh@291 -- # hostname 00:21:44.691 19:15:00 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:21:44.950 geninfo: WARNING: invalid characters removed from testname! 00:22:17.013 19:15:30 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:22:20.351 19:15:35 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:22.943 19:15:38 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:26.283 19:15:41 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:28.812 19:15:44 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:31.337 19:15:47 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:34.616 19:15:49 -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:36.515 19:15:52 -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:22:36.515 19:15:52 -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:22:37.455 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:22:37.455 Found 319 entries. 00:22:37.455 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:22:37.455 Writing .css and .png files. 00:22:37.455 Generating output. 00:22:37.455 Processing file include/linux/virtio_ring.h 00:22:37.713 Processing file include/spdk/util.h 00:22:37.713 Processing file include/spdk/thread.h 00:22:37.713 Processing file include/spdk/endian.h 00:22:37.713 Processing file include/spdk/histogram_data.h 00:22:37.713 Processing file include/spdk/nvme_spec.h 00:22:37.713 Processing file include/spdk/nvme.h 00:22:37.713 Processing file include/spdk/base64.h 00:22:37.713 Processing file include/spdk/trace.h 00:22:37.713 Processing file include/spdk/nvmf_transport.h 00:22:37.713 Processing file include/spdk/bdev_module.h 00:22:37.713 Processing file include/spdk/mmio.h 00:22:37.971 Processing file include/spdk_internal/nvme_tcp.h 00:22:37.971 Processing file include/spdk_internal/sock.h 00:22:37.971 Processing file include/spdk_internal/utf.h 00:22:37.971 Processing file include/spdk_internal/virtio.h 00:22:37.971 Processing file include/spdk_internal/sgl.h 00:22:37.971 Processing file include/spdk_internal/rdma.h 00:22:37.971 Processing file lib/accel/accel.c 00:22:37.971 Processing file lib/accel/accel_sw.c 00:22:37.971 Processing file lib/accel/accel_rpc.c 00:22:38.229 Processing file lib/bdev/bdev_rpc.c 00:22:38.229 Processing file lib/bdev/scsi_nvme.c 00:22:38.229 Processing file lib/bdev/bdev.c 00:22:38.229 Processing file lib/bdev/part.c 00:22:38.229 Processing file lib/bdev/bdev_zone.c 00:22:38.486 Processing file lib/blob/blobstore.c 00:22:38.487 Processing file lib/blob/blobstore.h 00:22:38.487 Processing file lib/blob/request.c 00:22:38.487 Processing file lib/blob/zeroes.c 00:22:38.487 Processing file lib/blob/blob_bs_dev.c 00:22:38.745 Processing file lib/blobfs/tree.c 00:22:38.745 Processing file lib/blobfs/blobfs.c 00:22:38.745 Processing file lib/conf/conf.c 00:22:38.745 Processing file lib/dma/dma.c 00:22:39.002 Processing file lib/env_dpdk/pci_ioat.c 00:22:39.002 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:22:39.002 Processing file lib/env_dpdk/pci_event.c 00:22:39.002 Processing file lib/env_dpdk/memory.c 00:22:39.002 Processing file lib/env_dpdk/pci_vmd.c 00:22:39.002 Processing file lib/env_dpdk/pci_dpdk.c 00:22:39.002 
Processing file lib/env_dpdk/sigbus_handler.c 00:22:39.002 Processing file lib/env_dpdk/threads.c 00:22:39.002 Processing file lib/env_dpdk/init.c 00:22:39.002 Processing file lib/env_dpdk/pci_idxd.c 00:22:39.002 Processing file lib/env_dpdk/pci.c 00:22:39.002 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:22:39.002 Processing file lib/env_dpdk/pci_virtio.c 00:22:39.002 Processing file lib/env_dpdk/env.c 00:22:39.260 Processing file lib/event/scheduler_static.c 00:22:39.260 Processing file lib/event/app.c 00:22:39.260 Processing file lib/event/reactor.c 00:22:39.260 Processing file lib/event/log_rpc.c 00:22:39.260 Processing file lib/event/app_rpc.c 00:22:39.519 Processing file lib/ftl/ftl_layout.c 00:22:39.520 Processing file lib/ftl/ftl_writer.h 00:22:39.520 Processing file lib/ftl/ftl_p2l.c 00:22:39.520 Processing file lib/ftl/ftl_nv_cache_io.h 00:22:39.520 Processing file lib/ftl/ftl_band.h 00:22:39.520 Processing file lib/ftl/ftl_band_ops.c 00:22:39.520 Processing file lib/ftl/ftl_io.h 00:22:39.520 Processing file lib/ftl/ftl_l2p_flat.c 00:22:39.520 Processing file lib/ftl/ftl_reloc.c 00:22:39.520 Processing file lib/ftl/ftl_trace.c 00:22:39.520 Processing file lib/ftl/ftl_io.c 00:22:39.520 Processing file lib/ftl/ftl_sb.c 00:22:39.520 Processing file lib/ftl/ftl_init.c 00:22:39.520 Processing file lib/ftl/ftl_debug.c 00:22:39.520 Processing file lib/ftl/ftl_core.c 00:22:39.520 Processing file lib/ftl/ftl_rq.c 00:22:39.520 Processing file lib/ftl/ftl_band.c 00:22:39.520 Processing file lib/ftl/ftl_writer.c 00:22:39.520 Processing file lib/ftl/ftl_nv_cache.h 00:22:39.520 Processing file lib/ftl/ftl_l2p_cache.c 00:22:39.520 Processing file lib/ftl/ftl_l2p.c 00:22:39.520 Processing file lib/ftl/ftl_nv_cache.c 00:22:39.520 Processing file lib/ftl/ftl_debug.h 00:22:39.520 Processing file lib/ftl/ftl_core.h 00:22:39.811 Processing file lib/ftl/base/ftl_base_bdev.c 00:22:39.811 Processing file lib/ftl/base/ftl_base_dev.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:22:39.811 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:22:40.070 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:22:40.070 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:22:40.070 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:22:40.070 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:22:40.070 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:22:40.070 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:22:40.331 Processing file lib/ftl/utils/ftl_mempool.c 00:22:40.331 Processing file lib/ftl/utils/ftl_property.h 00:22:40.331 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:22:40.331 Processing file lib/ftl/utils/ftl_conf.c 00:22:40.331 Processing file lib/ftl/utils/ftl_md.c 00:22:40.331 Processing file lib/ftl/utils/ftl_property.c 00:22:40.331 Processing file lib/ftl/utils/ftl_bitmap.c 00:22:40.331 Processing file lib/ftl/utils/ftl_addr_utils.h 00:22:40.331 
Processing file lib/ftl/utils/ftl_df.h 00:22:40.331 Processing file lib/idxd/idxd_internal.h 00:22:40.331 Processing file lib/idxd/idxd_user.c 00:22:40.331 Processing file lib/idxd/idxd.c 00:22:40.589 Processing file lib/init/rpc.c 00:22:40.589 Processing file lib/init/json_config.c 00:22:40.589 Processing file lib/init/subsystem_rpc.c 00:22:40.590 Processing file lib/init/subsystem.c 00:22:40.590 Processing file lib/ioat/ioat_internal.h 00:22:40.590 Processing file lib/ioat/ioat.c 00:22:40.849 Processing file lib/iscsi/task.c 00:22:40.849 Processing file lib/iscsi/portal_grp.c 00:22:40.849 Processing file lib/iscsi/md5.c 00:22:40.849 Processing file lib/iscsi/iscsi.h 00:22:40.849 Processing file lib/iscsi/tgt_node.c 00:22:40.849 Processing file lib/iscsi/iscsi_rpc.c 00:22:40.849 Processing file lib/iscsi/iscsi.c 00:22:40.849 Processing file lib/iscsi/task.h 00:22:40.849 Processing file lib/iscsi/iscsi_subsystem.c 00:22:40.849 Processing file lib/iscsi/conn.c 00:22:40.849 Processing file lib/iscsi/param.c 00:22:40.849 Processing file lib/iscsi/init_grp.c 00:22:41.107 Processing file lib/json/json_write.c 00:22:41.107 Processing file lib/json/json_parse.c 00:22:41.107 Processing file lib/json/json_util.c 00:22:41.107 Processing file lib/jsonrpc/jsonrpc_server.c 00:22:41.107 Processing file lib/jsonrpc/jsonrpc_client.c 00:22:41.107 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:22:41.107 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:22:41.367 Processing file lib/keyring/keyring_rpc.c 00:22:41.367 Processing file lib/keyring/keyring.c 00:22:41.367 Processing file lib/log/log_flags.c 00:22:41.367 Processing file lib/log/log.c 00:22:41.367 Processing file lib/log/log_deprecated.c 00:22:41.367 Processing file lib/lvol/lvol.c 00:22:41.625 Processing file lib/nbd/nbd_rpc.c 00:22:41.625 Processing file lib/nbd/nbd.c 00:22:41.625 Processing file lib/notify/notify.c 00:22:41.625 Processing file lib/notify/notify_rpc.c 00:22:42.561 Processing file lib/nvme/nvme.c 00:22:42.561 Processing file lib/nvme/nvme_quirks.c 00:22:42.561 Processing file lib/nvme/nvme_pcie_common.c 00:22:42.561 Processing file lib/nvme/nvme_fabric.c 00:22:42.561 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:22:42.561 Processing file lib/nvme/nvme_internal.h 00:22:42.561 Processing file lib/nvme/nvme_zns.c 00:22:42.561 Processing file lib/nvme/nvme_pcie_internal.h 00:22:42.561 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:22:42.561 Processing file lib/nvme/nvme_auth.c 00:22:42.561 Processing file lib/nvme/nvme_pcie.c 00:22:42.561 Processing file lib/nvme/nvme_io_msg.c 00:22:42.561 Processing file lib/nvme/nvme_poll_group.c 00:22:42.561 Processing file lib/nvme/nvme_ctrlr.c 00:22:42.561 Processing file lib/nvme/nvme_opal.c 00:22:42.561 Processing file lib/nvme/nvme_cuse.c 00:22:42.561 Processing file lib/nvme/nvme_ns.c 00:22:42.561 Processing file lib/nvme/nvme_rdma.c 00:22:42.561 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:22:42.561 Processing file lib/nvme/nvme_tcp.c 00:22:42.561 Processing file lib/nvme/nvme_ns_cmd.c 00:22:42.561 Processing file lib/nvme/nvme_qpair.c 00:22:42.561 Processing file lib/nvme/nvme_stubs.c 00:22:42.561 Processing file lib/nvme/nvme_transport.c 00:22:42.561 Processing file lib/nvme/nvme_discovery.c 00:22:43.127 Processing file lib/nvmf/nvmf.c 00:22:43.127 Processing file lib/nvmf/nvmf_internal.h 00:22:43.127 Processing file lib/nvmf/ctrlr.c 00:22:43.127 Processing file lib/nvmf/transport.c 00:22:43.127 Processing file lib/nvmf/nvmf_rpc.c 00:22:43.127 Processing file 
lib/nvmf/ctrlr_bdev.c 00:22:43.127 Processing file lib/nvmf/stubs.c 00:22:43.127 Processing file lib/nvmf/rdma.c 00:22:43.127 Processing file lib/nvmf/ctrlr_discovery.c 00:22:43.127 Processing file lib/nvmf/subsystem.c 00:22:43.127 Processing file lib/nvmf/tcp.c 00:22:43.127 Processing file lib/nvmf/auth.c 00:22:43.127 Processing file lib/rdma/rdma_verbs.c 00:22:43.127 Processing file lib/rdma/common.c 00:22:43.127 Processing file lib/rpc/rpc.c 00:22:43.384 Processing file lib/scsi/lun.c 00:22:43.384 Processing file lib/scsi/scsi.c 00:22:43.384 Processing file lib/scsi/dev.c 00:22:43.384 Processing file lib/scsi/scsi_bdev.c 00:22:43.384 Processing file lib/scsi/scsi_pr.c 00:22:43.384 Processing file lib/scsi/task.c 00:22:43.384 Processing file lib/scsi/scsi_rpc.c 00:22:43.384 Processing file lib/scsi/port.c 00:22:43.384 Processing file lib/sock/sock.c 00:22:43.384 Processing file lib/sock/sock_rpc.c 00:22:43.642 Processing file lib/thread/thread.c 00:22:43.642 Processing file lib/thread/iobuf.c 00:22:43.642 Processing file lib/trace/trace.c 00:22:43.642 Processing file lib/trace/trace_flags.c 00:22:43.642 Processing file lib/trace/trace_rpc.c 00:22:43.900 Processing file lib/trace_parser/trace.cpp 00:22:43.900 Processing file lib/ut/ut.c 00:22:43.900 Processing file lib/ut_mock/mock.c 00:22:44.158 Processing file lib/util/crc16.c 00:22:44.158 Processing file lib/util/crc32c.c 00:22:44.158 Processing file lib/util/base64.c 00:22:44.158 Processing file lib/util/dif.c 00:22:44.158 Processing file lib/util/fd.c 00:22:44.158 Processing file lib/util/iov.c 00:22:44.158 Processing file lib/util/pipe.c 00:22:44.158 Processing file lib/util/strerror_tls.c 00:22:44.158 Processing file lib/util/fd_group.c 00:22:44.158 Processing file lib/util/crc32_ieee.c 00:22:44.158 Processing file lib/util/uuid.c 00:22:44.158 Processing file lib/util/string.c 00:22:44.158 Processing file lib/util/bit_array.c 00:22:44.158 Processing file lib/util/zipf.c 00:22:44.158 Processing file lib/util/cpuset.c 00:22:44.158 Processing file lib/util/xor.c 00:22:44.158 Processing file lib/util/crc32.c 00:22:44.158 Processing file lib/util/crc64.c 00:22:44.158 Processing file lib/util/math.c 00:22:44.158 Processing file lib/util/hexlify.c 00:22:44.158 Processing file lib/util/file.c 00:22:44.416 Processing file lib/vfio_user/host/vfio_user_pci.c 00:22:44.416 Processing file lib/vfio_user/host/vfio_user.c 00:22:44.728 Processing file lib/vhost/vhost_internal.h 00:22:44.728 Processing file lib/vhost/vhost_blk.c 00:22:44.728 Processing file lib/vhost/rte_vhost_user.c 00:22:44.728 Processing file lib/vhost/vhost.c 00:22:44.728 Processing file lib/vhost/vhost_rpc.c 00:22:44.728 Processing file lib/vhost/vhost_scsi.c 00:22:44.728 Processing file lib/virtio/virtio_vhost_user.c 00:22:44.728 Processing file lib/virtio/virtio_pci.c 00:22:44.728 Processing file lib/virtio/virtio_vfio_user.c 00:22:44.728 Processing file lib/virtio/virtio.c 00:22:44.728 Processing file lib/vmd/led.c 00:22:44.728 Processing file lib/vmd/vmd.c 00:22:44.987 Processing file module/accel/dsa/accel_dsa_rpc.c 00:22:44.987 Processing file module/accel/dsa/accel_dsa.c 00:22:44.987 Processing file module/accel/error/accel_error_rpc.c 00:22:44.987 Processing file module/accel/error/accel_error.c 00:22:44.987 Processing file module/accel/iaa/accel_iaa_rpc.c 00:22:44.987 Processing file module/accel/iaa/accel_iaa.c 00:22:45.244 Processing file module/accel/ioat/accel_ioat_rpc.c 00:22:45.244 Processing file module/accel/ioat/accel_ioat.c 00:22:45.244 Processing file 
module/bdev/aio/bdev_aio.c 00:22:45.244 Processing file module/bdev/aio/bdev_aio_rpc.c 00:22:45.244 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:22:45.244 Processing file module/bdev/delay/vbdev_delay.c 00:22:45.502 Processing file module/bdev/error/vbdev_error.c 00:22:45.502 Processing file module/bdev/error/vbdev_error_rpc.c 00:22:45.502 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:22:45.502 Processing file module/bdev/ftl/bdev_ftl.c 00:22:45.502 Processing file module/bdev/gpt/gpt.c 00:22:45.502 Processing file module/bdev/gpt/gpt.h 00:22:45.502 Processing file module/bdev/gpt/vbdev_gpt.c 00:22:45.759 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:22:45.759 Processing file module/bdev/iscsi/bdev_iscsi.c 00:22:45.759 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:22:45.759 Processing file module/bdev/lvol/vbdev_lvol.c 00:22:46.017 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:22:46.017 Processing file module/bdev/malloc/bdev_malloc.c 00:22:46.017 Processing file module/bdev/null/bdev_null_rpc.c 00:22:46.017 Processing file module/bdev/null/bdev_null.c 00:22:46.275 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:22:46.275 Processing file module/bdev/nvme/bdev_mdns_client.c 00:22:46.275 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:22:46.275 Processing file module/bdev/nvme/bdev_nvme.c 00:22:46.275 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:22:46.275 Processing file module/bdev/nvme/nvme_rpc.c 00:22:46.275 Processing file module/bdev/nvme/vbdev_opal.c 00:22:46.533 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:22:46.533 Processing file module/bdev/passthru/vbdev_passthru.c 00:22:46.791 Processing file module/bdev/raid/bdev_raid.c 00:22:46.791 Processing file module/bdev/raid/concat.c 00:22:46.791 Processing file module/bdev/raid/raid1.c 00:22:46.791 Processing file module/bdev/raid/bdev_raid.h 00:22:46.791 Processing file module/bdev/raid/raid5f.c 00:22:46.791 Processing file module/bdev/raid/raid0.c 00:22:46.791 Processing file module/bdev/raid/bdev_raid_rpc.c 00:22:46.791 Processing file module/bdev/raid/bdev_raid_sb.c 00:22:46.791 Processing file module/bdev/split/vbdev_split_rpc.c 00:22:46.791 Processing file module/bdev/split/vbdev_split.c 00:22:47.049 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:22:47.049 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:22:47.049 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:22:47.049 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:22:47.049 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:22:47.306 Processing file module/blob/bdev/blob_bdev.c 00:22:47.306 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:22:47.306 Processing file module/blobfs/bdev/blobfs_bdev.c 00:22:47.306 Processing file module/env_dpdk/env_dpdk_rpc.c 00:22:47.306 Processing file module/event/subsystems/accel/accel.c 00:22:47.563 Processing file module/event/subsystems/bdev/bdev.c 00:22:47.563 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:22:47.563 Processing file module/event/subsystems/iobuf/iobuf.c 00:22:47.563 Processing file module/event/subsystems/iscsi/iscsi.c 00:22:47.563 Processing file module/event/subsystems/keyring/keyring.c 00:22:47.820 Processing file module/event/subsystems/nbd/nbd.c 00:22:47.820 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:22:47.820 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:22:47.820 Processing file module/event/subsystems/scheduler/scheduler.c 00:22:48.078 
Processing file module/event/subsystems/scsi/scsi.c 00:22:48.078 Processing file module/event/subsystems/sock/sock.c 00:22:48.078 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:22:48.078 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:22:48.337 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:22:48.337 Processing file module/event/subsystems/vmd/vmd.c 00:22:48.337 Processing file module/keyring/file/keyring_rpc.c 00:22:48.337 Processing file module/keyring/file/keyring.c 00:22:48.337 Processing file module/keyring/linux/keyring_rpc.c 00:22:48.337 Processing file module/keyring/linux/keyring.c 00:22:48.595 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:22:48.595 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:22:48.595 Processing file module/scheduler/gscheduler/gscheduler.c 00:22:48.595 Processing file module/sock/sock_kernel.h 00:22:48.854 Processing file module/sock/posix/posix.c 00:22:48.854 Writing directory view page. 00:22:48.854 Overall coverage rate: 00:22:48.854 lines......: 39.0% (39940 of 102408 lines) 00:22:48.854 functions..: 42.6% (3653 of 8574 functions) 00:22:48.854 00:22:48.854 00:22:48.854 ===================== 00:22:48.854 All unit tests passed 00:22:48.854 ===================== 00:22:48.854 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:22:48.854 19:16:04 -- unit/unittest.sh@303 -- # set +x 00:22:48.854 00:22:48.854 00:22:48.854 ************************************ 00:22:48.854 END TEST unittest 00:22:48.854 ************************************ 00:22:48.854 00:22:48.854 real 3m31.086s 00:22:48.854 user 3m1.605s 00:22:48.854 sys 0m19.328s 00:22:48.854 19:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:48.854 19:16:04 -- common/autotest_common.sh@10 -- # set +x 00:22:48.854 19:16:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:22:48.854 19:16:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:22:48.854 19:16:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:22:48.854 19:16:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:22:48.854 19:16:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:48.854 19:16:04 -- common/autotest_common.sh@10 -- # set +x 00:22:48.854 19:16:04 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:48.854 19:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:48.854 19:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:48.854 19:16:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.113 ************************************ 00:22:49.113 START TEST env 00:22:49.113 ************************************ 00:22:49.113 19:16:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:49.113 * Looking for test storage... 
00:22:49.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:22:49.113 19:16:04 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:49.113 19:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:49.113 19:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.113 19:16:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.113 ************************************ 00:22:49.113 START TEST env_memory 00:22:49.113 ************************************ 00:22:49.113 19:16:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:49.113 00:22:49.113 00:22:49.113 CUnit - A unit testing framework for C - Version 2.1-3 00:22:49.113 http://cunit.sourceforge.net/ 00:22:49.113 00:22:49.113 00:22:49.113 Suite: memory 00:22:49.113 Test: alloc and free memory map ...[2024-04-18 19:16:05.009506] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:22:49.113 passed 00:22:49.371 Test: mem map translation ...[2024-04-18 19:16:05.045955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:22:49.371 [2024-04-18 19:16:05.046196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:22:49.371 [2024-04-18 19:16:05.046308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:22:49.371 [2024-04-18 19:16:05.046419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:22:49.371 passed 00:22:49.371 Test: mem map registration ...[2024-04-18 19:16:05.102709] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:22:49.371 [2024-04-18 19:16:05.102921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:22:49.371 passed 00:22:49.371 Test: mem map adjacent registrations ...passed 00:22:49.371 00:22:49.371 Run Summary: Type Total Ran Passed Failed Inactive 00:22:49.371 suites 1 1 n/a 0 0 00:22:49.371 tests 4 4 4 0 0 00:22:49.371 asserts 152 152 152 0 n/a 00:22:49.371 00:22:49.371 Elapsed time = 0.203 seconds 00:22:49.371 00:22:49.371 real 0m0.244s 00:22:49.371 user 0m0.218s 00:22:49.371 sys 0m0.025s 00:22:49.371 19:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:49.371 19:16:05 -- common/autotest_common.sh@10 -- # set +x 00:22:49.371 ************************************ 00:22:49.371 END TEST env_memory 00:22:49.371 ************************************ 00:22:49.371 19:16:05 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:49.371 19:16:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:49.371 19:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.371 19:16:05 -- common/autotest_common.sh@10 -- # set +x 00:22:49.629 ************************************ 00:22:49.629 START TEST env_vtophys 00:22:49.629 ************************************ 00:22:49.629 19:16:05 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:49.629 EAL: lib.eal log level changed from notice to debug 00:22:49.629 EAL: Detected lcore 0 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 1 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 2 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 3 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 4 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 5 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 6 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 7 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 8 as core 0 on socket 0 00:22:49.629 EAL: Detected lcore 9 as core 0 on socket 0 00:22:49.629 EAL: Maximum logical cores by configuration: 128 00:22:49.629 EAL: Detected CPU lcores: 10 00:22:49.629 EAL: Detected NUMA nodes: 1 00:22:49.629 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:22:49.629 EAL: Checking presence of .so 'librte_eal.so.24' 00:22:49.629 EAL: Checking presence of .so 'librte_eal.so' 00:22:49.629 EAL: Detected static linkage of DPDK 00:22:49.629 EAL: No shared files mode enabled, IPC will be disabled 00:22:49.629 EAL: Selected IOVA mode 'PA' 00:22:49.629 EAL: Probing VFIO support... 00:22:49.630 EAL: IOMMU type 1 (Type 1) is supported 00:22:49.630 EAL: IOMMU type 7 (sPAPR) is not supported 00:22:49.630 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:22:49.630 EAL: VFIO support initialized 00:22:49.630 EAL: Ask a virtual area of 0x2e000 bytes 00:22:49.630 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:22:49.630 EAL: Setting up physically contiguous memory... 00:22:49.630 EAL: Setting maximum number of open files to 1048576 00:22:49.630 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:22:49.630 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:22:49.630 EAL: Ask a virtual area of 0x61000 bytes 00:22:49.630 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:22:49.630 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:49.630 EAL: Ask a virtual area of 0x400000000 bytes 00:22:49.630 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:22:49.630 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:22:49.630 EAL: Ask a virtual area of 0x61000 bytes 00:22:49.630 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:22:49.630 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:49.630 EAL: Ask a virtual area of 0x400000000 bytes 00:22:49.630 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:22:49.630 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:22:49.630 EAL: Ask a virtual area of 0x61000 bytes 00:22:49.630 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:22:49.630 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:49.630 EAL: Ask a virtual area of 0x400000000 bytes 00:22:49.630 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:22:49.630 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:22:49.630 EAL: Ask a virtual area of 0x61000 bytes 00:22:49.630 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:22:49.630 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:49.630 EAL: Ask a virtual area of 0x400000000 bytes 00:22:49.630 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:22:49.630 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:22:49.630 EAL: Hugepages will be freed exactly as allocated. 
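The memseg lists above are what the vtophys helper prints when DPDK comes up with 2 MB hugepages and multi-process support disabled. A minimal sketch for reproducing this output outside the Jenkins harness, assuming the stock scripts/setup.sh helper and its HUGEMEM variable (neither appears in this log), is:

    # Reserve 2 MB hugepages before running any SPDK/DPDK binary (HUGEMEM is in megabytes).
    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # Run the vtophys test binary directly; it prints the same EAL probing seen above.
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys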
00:22:49.630 EAL: No shared files mode enabled, IPC is disabled 00:22:49.630 EAL: No shared files mode enabled, IPC is disabled 00:22:49.630 EAL: TSC frequency is ~2100000 KHz 00:22:49.630 EAL: Main lcore 0 is ready (tid=7f672743aa40;cpuset=[0]) 00:22:49.630 EAL: Trying to obtain current memory policy. 00:22:49.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:49.630 EAL: Restoring previous memory policy: 0 00:22:49.630 EAL: request: mp_malloc_sync 00:22:49.630 EAL: No shared files mode enabled, IPC is disabled 00:22:49.630 EAL: Heap on socket 0 was expanded by 2MB 00:22:49.630 EAL: No shared files mode enabled, IPC is disabled 00:22:49.630 EAL: Mem event callback 'spdk:(nil)' registered 00:22:49.630 00:22:49.630 00:22:49.630 CUnit - A unit testing framework for C - Version 2.1-3 00:22:49.630 http://cunit.sourceforge.net/ 00:22:49.630 00:22:49.630 00:22:49.630 Suite: components_suite 00:22:50.195 Test: vtophys_malloc_test ...passed 00:22:50.195 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:22:50.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.195 EAL: Restoring previous memory policy: 0 00:22:50.195 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.195 EAL: request: mp_malloc_sync 00:22:50.195 EAL: No shared files mode enabled, IPC is disabled 00:22:50.195 EAL: Heap on socket 0 was expanded by 4MB 00:22:50.195 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.195 EAL: request: mp_malloc_sync 00:22:50.195 EAL: No shared files mode enabled, IPC is disabled 00:22:50.195 EAL: Heap on socket 0 was shrunk by 4MB 00:22:50.195 EAL: Trying to obtain current memory policy. 00:22:50.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.195 EAL: Restoring previous memory policy: 0 00:22:50.195 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.195 EAL: request: mp_malloc_sync 00:22:50.195 EAL: No shared files mode enabled, IPC is disabled 00:22:50.195 EAL: Heap on socket 0 was expanded by 6MB 00:22:50.195 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.195 EAL: request: mp_malloc_sync 00:22:50.195 EAL: No shared files mode enabled, IPC is disabled 00:22:50.195 EAL: Heap on socket 0 was shrunk by 6MB 00:22:50.195 EAL: Trying to obtain current memory policy. 00:22:50.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.195 EAL: Restoring previous memory policy: 0 00:22:50.195 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.195 EAL: request: mp_malloc_sync 00:22:50.195 EAL: No shared files mode enabled, IPC is disabled 00:22:50.195 EAL: Heap on socket 0 was expanded by 10MB 00:22:50.195 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.195 EAL: request: mp_malloc_sync 00:22:50.195 EAL: No shared files mode enabled, IPC is disabled 00:22:50.195 EAL: Heap on socket 0 was shrunk by 10MB 00:22:50.453 EAL: Trying to obtain current memory policy. 00:22:50.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.453 EAL: Restoring previous memory policy: 0 00:22:50.453 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.453 EAL: request: mp_malloc_sync 00:22:50.453 EAL: No shared files mode enabled, IPC is disabled 00:22:50.453 EAL: Heap on socket 0 was expanded by 18MB 00:22:50.453 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.453 EAL: request: mp_malloc_sync 00:22:50.453 EAL: No shared files mode enabled, IPC is disabled 00:22:50.453 EAL: Heap on socket 0 was shrunk by 18MB 00:22:50.454 EAL: Trying to obtain current memory policy. 
00:22:50.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.454 EAL: Restoring previous memory policy: 0 00:22:50.454 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.454 EAL: request: mp_malloc_sync 00:22:50.454 EAL: No shared files mode enabled, IPC is disabled 00:22:50.454 EAL: Heap on socket 0 was expanded by 34MB 00:22:50.454 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.454 EAL: request: mp_malloc_sync 00:22:50.454 EAL: No shared files mode enabled, IPC is disabled 00:22:50.454 EAL: Heap on socket 0 was shrunk by 34MB 00:22:50.454 EAL: Trying to obtain current memory policy. 00:22:50.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.712 EAL: Restoring previous memory policy: 0 00:22:50.712 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.712 EAL: request: mp_malloc_sync 00:22:50.712 EAL: No shared files mode enabled, IPC is disabled 00:22:50.712 EAL: Heap on socket 0 was expanded by 66MB 00:22:50.712 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.712 EAL: request: mp_malloc_sync 00:22:50.712 EAL: No shared files mode enabled, IPC is disabled 00:22:50.712 EAL: Heap on socket 0 was shrunk by 66MB 00:22:50.970 EAL: Trying to obtain current memory policy. 00:22:50.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:50.970 EAL: Restoring previous memory policy: 0 00:22:50.970 EAL: Calling mem event callback 'spdk:(nil)' 00:22:50.970 EAL: request: mp_malloc_sync 00:22:50.970 EAL: No shared files mode enabled, IPC is disabled 00:22:50.970 EAL: Heap on socket 0 was expanded by 130MB 00:22:51.228 EAL: Calling mem event callback 'spdk:(nil)' 00:22:51.228 EAL: request: mp_malloc_sync 00:22:51.228 EAL: No shared files mode enabled, IPC is disabled 00:22:51.228 EAL: Heap on socket 0 was shrunk by 130MB 00:22:51.487 EAL: Trying to obtain current memory policy. 00:22:51.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:51.487 EAL: Restoring previous memory policy: 0 00:22:51.487 EAL: Calling mem event callback 'spdk:(nil)' 00:22:51.487 EAL: request: mp_malloc_sync 00:22:51.487 EAL: No shared files mode enabled, IPC is disabled 00:22:51.487 EAL: Heap on socket 0 was expanded by 258MB 00:22:52.422 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.422 EAL: request: mp_malloc_sync 00:22:52.422 EAL: No shared files mode enabled, IPC is disabled 00:22:52.422 EAL: Heap on socket 0 was shrunk by 258MB 00:22:52.680 EAL: Trying to obtain current memory policy. 00:22:52.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.938 EAL: Restoring previous memory policy: 0 00:22:52.938 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.938 EAL: request: mp_malloc_sync 00:22:52.938 EAL: No shared files mode enabled, IPC is disabled 00:22:52.938 EAL: Heap on socket 0 was expanded by 514MB 00:22:54.315 EAL: Calling mem event callback 'spdk:(nil)' 00:22:54.315 EAL: request: mp_malloc_sync 00:22:54.315 EAL: No shared files mode enabled, IPC is disabled 00:22:54.315 EAL: Heap on socket 0 was shrunk by 514MB 00:22:55.250 EAL: Trying to obtain current memory policy. 
00:22:55.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:55.508 EAL: Restoring previous memory policy: 0 00:22:55.508 EAL: Calling mem event callback 'spdk:(nil)' 00:22:55.508 EAL: request: mp_malloc_sync 00:22:55.508 EAL: No shared files mode enabled, IPC is disabled 00:22:55.508 EAL: Heap on socket 0 was expanded by 1026MB 00:22:58.040 EAL: Calling mem event callback 'spdk:(nil)' 00:22:58.298 EAL: request: mp_malloc_sync 00:22:58.298 EAL: No shared files mode enabled, IPC is disabled 00:22:58.298 EAL: Heap on socket 0 was shrunk by 1026MB 00:23:00.200 passed 00:23:00.200 00:23:00.200 Run Summary: Type Total Ran Passed Failed Inactive 00:23:00.200 suites 1 1 n/a 0 0 00:23:00.200 tests 2 2 2 0 0 00:23:00.200 asserts 6496 6496 6496 0 n/a 00:23:00.200 00:23:00.200 Elapsed time = 10.282 seconds 00:23:00.200 EAL: Calling mem event callback 'spdk:(nil)' 00:23:00.200 EAL: request: mp_malloc_sync 00:23:00.200 EAL: No shared files mode enabled, IPC is disabled 00:23:00.200 EAL: Heap on socket 0 was shrunk by 2MB 00:23:00.200 EAL: No shared files mode enabled, IPC is disabled 00:23:00.200 EAL: No shared files mode enabled, IPC is disabled 00:23:00.200 EAL: No shared files mode enabled, IPC is disabled 00:23:00.200 ************************************ 00:23:00.200 END TEST env_vtophys 00:23:00.200 ************************************ 00:23:00.200 00:23:00.200 real 0m10.626s 00:23:00.200 user 0m9.514s 00:23:00.200 sys 0m0.956s 00:23:00.200 19:16:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:00.200 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:23:00.200 19:16:15 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:23:00.200 19:16:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:00.200 19:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.200 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:23:00.200 ************************************ 00:23:00.200 START TEST env_pci 00:23:00.200 ************************************ 00:23:00.200 19:16:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:23:00.200 00:23:00.200 00:23:00.200 CUnit - A unit testing framework for C - Version 2.1-3 00:23:00.200 http://cunit.sourceforge.net/ 00:23:00.200 00:23:00.200 00:23:00.200 Suite: pci 00:23:00.200 Test: pci_hook ...[2024-04-18 19:16:16.054092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 110091 has claimed it 00:23:00.200 EAL: Cannot find device (10000:00:01.0) 00:23:00.200 EAL: Failed to attach device on primary process 00:23:00.200 passed 00:23:00.200 00:23:00.200 Run Summary: Type Total Ran Passed Failed Inactive 00:23:00.200 suites 1 1 n/a 0 0 00:23:00.200 tests 1 1 1 0 0 00:23:00.200 asserts 25 25 25 0 n/a 00:23:00.200 00:23:00.200 Elapsed time = 0.006 seconds 00:23:00.200 ************************************ 00:23:00.200 END TEST env_pci 00:23:00.200 ************************************ 00:23:00.200 00:23:00.200 real 0m0.098s 00:23:00.200 user 0m0.039s 00:23:00.200 sys 0m0.059s 00:23:00.200 19:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:00.200 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.460 19:16:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:23:00.460 19:16:16 -- env/env.sh@15 -- # uname 00:23:00.460 19:16:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:23:00.460 19:16:16 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:23:00.460 19:16:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:23:00.460 19:16:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:00.460 19:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.460 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.460 ************************************ 00:23:00.460 START TEST env_dpdk_post_init 00:23:00.460 ************************************ 00:23:00.460 19:16:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:23:00.460 EAL: Detected CPU lcores: 10 00:23:00.460 EAL: Detected NUMA nodes: 1 00:23:00.460 EAL: Detected static linkage of DPDK 00:23:00.460 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:23:00.460 EAL: Selected IOVA mode 'PA' 00:23:00.460 EAL: VFIO support initialized 00:23:00.718 TELEMETRY: No legacy callbacks, legacy socket not created 00:23:00.718 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:23:00.718 Starting DPDK initialization... 00:23:00.718 Starting SPDK post initialization... 00:23:00.718 SPDK NVMe probe 00:23:00.718 Attaching to 0000:00:10.0 00:23:00.718 Attached to 0000:00:10.0 00:23:00.718 Cleaning up... 00:23:00.718 00:23:00.718 real 0m0.265s 00:23:00.718 user 0m0.087s 00:23:00.718 sys 0m0.079s 00:23:00.718 19:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:00.718 ************************************ 00:23:00.718 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.718 END TEST env_dpdk_post_init 00:23:00.718 ************************************ 00:23:00.718 19:16:16 -- env/env.sh@26 -- # uname 00:23:00.718 19:16:16 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:23:00.718 19:16:16 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:23:00.718 19:16:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:00.718 19:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.718 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.718 ************************************ 00:23:00.718 START TEST env_mem_callbacks 00:23:00.718 ************************************ 00:23:00.718 19:16:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:23:00.977 EAL: Detected CPU lcores: 10 00:23:00.977 EAL: Detected NUMA nodes: 1 00:23:00.977 EAL: Detected static linkage of DPDK 00:23:00.977 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:23:00.977 EAL: Selected IOVA mode 'PA' 00:23:00.977 EAL: VFIO support initialized 00:23:00.977 TELEMETRY: No legacy callbacks, legacy socket not created 00:23:00.977 00:23:00.977 00:23:00.977 CUnit - A unit testing framework for C - Version 2.1-3 00:23:00.977 http://cunit.sourceforge.net/ 00:23:00.977 00:23:00.977 00:23:00.977 Suite: memory 00:23:00.977 Test: test ... 
00:23:00.977 register 0x200000200000 2097152 00:23:00.977 malloc 3145728 00:23:00.977 register 0x200000400000 4194304 00:23:00.977 buf 0x2000004fffc0 len 3145728 PASSED 00:23:00.977 malloc 64 00:23:00.977 buf 0x2000004ffec0 len 64 PASSED 00:23:00.977 malloc 4194304 00:23:00.977 register 0x200000800000 6291456 00:23:00.977 buf 0x2000009fffc0 len 4194304 PASSED 00:23:00.977 free 0x2000004fffc0 3145728 00:23:00.977 free 0x2000004ffec0 64 00:23:00.977 unregister 0x200000400000 4194304 PASSED 00:23:00.977 free 0x2000009fffc0 4194304 00:23:00.977 unregister 0x200000800000 6291456 PASSED 00:23:00.977 malloc 8388608 00:23:00.977 register 0x200000400000 10485760 00:23:00.977 buf 0x2000005fffc0 len 8388608 PASSED 00:23:00.977 free 0x2000005fffc0 8388608 00:23:00.977 unregister 0x200000400000 10485760 PASSED 00:23:00.977 passed 00:23:00.977 00:23:00.977 Run Summary: Type Total Ran Passed Failed Inactive 00:23:00.977 suites 1 1 n/a 0 0 00:23:00.977 tests 1 1 1 0 0 00:23:00.977 asserts 15 15 15 0 n/a 00:23:00.977 00:23:00.977 Elapsed time = 0.071 seconds 00:23:01.235 00:23:01.235 real 0m0.331s 00:23:01.235 user 0m0.153s 00:23:01.235 sys 0m0.076s 00:23:01.235 19:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:01.235 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:01.235 ************************************ 00:23:01.235 END TEST env_mem_callbacks 00:23:01.235 ************************************ 00:23:01.235 ************************************ 00:23:01.235 END TEST env 00:23:01.235 ************************************ 00:23:01.235 00:23:01.235 real 0m12.171s 00:23:01.235 user 0m10.285s 00:23:01.235 sys 0m1.518s 00:23:01.235 19:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:01.235 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:23:01.235 19:16:17 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:23:01.235 19:16:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:01.235 19:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:01.235 19:16:17 -- common/autotest_common.sh@10 -- # set +x 00:23:01.235 ************************************ 00:23:01.235 START TEST rpc 00:23:01.235 ************************************ 00:23:01.235 19:16:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:23:01.235 * Looking for test storage... 00:23:01.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:23:01.494 19:16:17 -- rpc/rpc.sh@65 -- # spdk_pid=110236 00:23:01.494 19:16:17 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:23:01.494 19:16:17 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:01.494 19:16:17 -- rpc/rpc.sh@67 -- # waitforlisten 110236 00:23:01.494 19:16:17 -- common/autotest_common.sh@817 -- # '[' -z 110236 ']' 00:23:01.494 19:16:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.494 19:16:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:01.494 19:16:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
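rpc.sh drives a freshly started spdk_tgt entirely over the Unix-domain RPC socket it is waiting for above. A rough manual equivalent of that handshake, assuming the scripts/rpc.py client shipped in the same repository (the rpc_cmd helper seen throughout this log serves the same role), looks like:

    # Start the target with bdev tracepoints enabled, as rpc.sh does with -e bdev.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # Block until the RPC socket exists, then issue a first call against it.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs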
00:23:01.494 19:16:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:01.494 19:16:17 -- common/autotest_common.sh@10 -- # set +x 00:23:01.494 [2024-04-18 19:16:17.283707] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:01.494 [2024-04-18 19:16:17.284308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110236 ] 00:23:01.753 [2024-04-18 19:16:17.451978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.010 [2024-04-18 19:16:17.749658] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:23:02.010 [2024-04-18 19:16:17.749959] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110236' to capture a snapshot of events at runtime. 00:23:02.010 [2024-04-18 19:16:17.750098] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.010 [2024-04-18 19:16:17.750157] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.010 [2024-04-18 19:16:17.750325] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110236 for offline analysis/debug. 00:23:02.010 [2024-04-18 19:16:17.750435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.953 19:16:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:02.953 19:16:18 -- common/autotest_common.sh@850 -- # return 0 00:23:02.953 19:16:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:23:02.953 19:16:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:23:02.953 19:16:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:23:02.953 19:16:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:23:02.953 19:16:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:02.953 19:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:02.953 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:23:02.953 ************************************ 00:23:02.953 START TEST rpc_integrity 00:23:02.953 ************************************ 00:23:02.953 19:16:18 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:23:02.953 19:16:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:02.953 19:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.953 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:23:02.953 19:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.953 19:16:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:23:02.953 19:16:18 -- rpc/rpc.sh@13 -- # jq length 00:23:03.212 19:16:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:23:03.212 19:16:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:23:03.212 19:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.212 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.212 19:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.212 19:16:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:23:03.212 19:16:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:23:03.212 19:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.212 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.212 19:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.212 19:16:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:23:03.212 { 00:23:03.212 "name": "Malloc0", 00:23:03.212 "aliases": [ 00:23:03.212 "1a51bedd-abd8-4956-93ea-b97053fc5300" 00:23:03.212 ], 00:23:03.212 "product_name": "Malloc disk", 00:23:03.212 "block_size": 512, 00:23:03.212 "num_blocks": 16384, 00:23:03.212 "uuid": "1a51bedd-abd8-4956-93ea-b97053fc5300", 00:23:03.212 "assigned_rate_limits": { 00:23:03.212 "rw_ios_per_sec": 0, 00:23:03.212 "rw_mbytes_per_sec": 0, 00:23:03.212 "r_mbytes_per_sec": 0, 00:23:03.212 "w_mbytes_per_sec": 0 00:23:03.212 }, 00:23:03.212 "claimed": false, 00:23:03.212 "zoned": false, 00:23:03.212 "supported_io_types": { 00:23:03.212 "read": true, 00:23:03.212 "write": true, 00:23:03.212 "unmap": true, 00:23:03.212 "write_zeroes": true, 00:23:03.212 "flush": true, 00:23:03.212 "reset": true, 00:23:03.212 "compare": false, 00:23:03.212 "compare_and_write": false, 00:23:03.212 "abort": true, 00:23:03.212 "nvme_admin": false, 00:23:03.212 "nvme_io": false 00:23:03.212 }, 00:23:03.212 "memory_domains": [ 00:23:03.212 { 00:23:03.212 "dma_device_id": "system", 00:23:03.212 "dma_device_type": 1 00:23:03.212 }, 00:23:03.212 { 00:23:03.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.212 "dma_device_type": 2 00:23:03.212 } 00:23:03.212 ], 00:23:03.212 "driver_specific": {} 00:23:03.212 } 00:23:03.212 ]' 00:23:03.212 19:16:18 -- rpc/rpc.sh@17 -- # jq length 00:23:03.212 19:16:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:23:03.212 19:16:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:23:03.212 19:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.212 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.212 [2024-04-18 19:16:18.982779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:23:03.212 [2024-04-18 19:16:18.983031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.212 [2024-04-18 19:16:18.983113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:03.212 [2024-04-18 19:16:18.983230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.212 [2024-04-18 19:16:18.985832] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.212 [2024-04-18 19:16:18.985995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:23:03.212 Passthru0 00:23:03.212 19:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.212 19:16:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:23:03.212 19:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.212 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.212 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.212 19:16:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:23:03.212 { 00:23:03.212 "name": "Malloc0", 00:23:03.212 "aliases": [ 00:23:03.212 "1a51bedd-abd8-4956-93ea-b97053fc5300" 00:23:03.212 ], 00:23:03.213 "product_name": "Malloc disk", 00:23:03.213 "block_size": 512, 00:23:03.213 "num_blocks": 16384, 00:23:03.213 "uuid": "1a51bedd-abd8-4956-93ea-b97053fc5300", 00:23:03.213 "assigned_rate_limits": { 00:23:03.213 "rw_ios_per_sec": 0, 00:23:03.213 "rw_mbytes_per_sec": 0, 00:23:03.213 "r_mbytes_per_sec": 0, 00:23:03.213 
"w_mbytes_per_sec": 0 00:23:03.213 }, 00:23:03.213 "claimed": true, 00:23:03.213 "claim_type": "exclusive_write", 00:23:03.213 "zoned": false, 00:23:03.213 "supported_io_types": { 00:23:03.213 "read": true, 00:23:03.213 "write": true, 00:23:03.213 "unmap": true, 00:23:03.213 "write_zeroes": true, 00:23:03.213 "flush": true, 00:23:03.213 "reset": true, 00:23:03.213 "compare": false, 00:23:03.213 "compare_and_write": false, 00:23:03.213 "abort": true, 00:23:03.213 "nvme_admin": false, 00:23:03.213 "nvme_io": false 00:23:03.213 }, 00:23:03.213 "memory_domains": [ 00:23:03.213 { 00:23:03.213 "dma_device_id": "system", 00:23:03.213 "dma_device_type": 1 00:23:03.213 }, 00:23:03.213 { 00:23:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.213 "dma_device_type": 2 00:23:03.213 } 00:23:03.213 ], 00:23:03.213 "driver_specific": {} 00:23:03.213 }, 00:23:03.213 { 00:23:03.213 "name": "Passthru0", 00:23:03.213 "aliases": [ 00:23:03.213 "5205b2a5-f4c5-5176-8a04-7a9b1195b037" 00:23:03.213 ], 00:23:03.213 "product_name": "passthru", 00:23:03.213 "block_size": 512, 00:23:03.213 "num_blocks": 16384, 00:23:03.213 "uuid": "5205b2a5-f4c5-5176-8a04-7a9b1195b037", 00:23:03.213 "assigned_rate_limits": { 00:23:03.213 "rw_ios_per_sec": 0, 00:23:03.213 "rw_mbytes_per_sec": 0, 00:23:03.213 "r_mbytes_per_sec": 0, 00:23:03.213 "w_mbytes_per_sec": 0 00:23:03.213 }, 00:23:03.213 "claimed": false, 00:23:03.213 "zoned": false, 00:23:03.213 "supported_io_types": { 00:23:03.213 "read": true, 00:23:03.213 "write": true, 00:23:03.213 "unmap": true, 00:23:03.213 "write_zeroes": true, 00:23:03.213 "flush": true, 00:23:03.213 "reset": true, 00:23:03.213 "compare": false, 00:23:03.213 "compare_and_write": false, 00:23:03.213 "abort": true, 00:23:03.213 "nvme_admin": false, 00:23:03.213 "nvme_io": false 00:23:03.213 }, 00:23:03.213 "memory_domains": [ 00:23:03.213 { 00:23:03.213 "dma_device_id": "system", 00:23:03.213 "dma_device_type": 1 00:23:03.213 }, 00:23:03.213 { 00:23:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.213 "dma_device_type": 2 00:23:03.213 } 00:23:03.213 ], 00:23:03.213 "driver_specific": { 00:23:03.213 "passthru": { 00:23:03.213 "name": "Passthru0", 00:23:03.213 "base_bdev_name": "Malloc0" 00:23:03.213 } 00:23:03.213 } 00:23:03.213 } 00:23:03.213 ]' 00:23:03.213 19:16:19 -- rpc/rpc.sh@21 -- # jq length 00:23:03.213 19:16:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:23:03.213 19:16:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:23:03.213 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.213 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.213 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.213 19:16:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:03.213 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.213 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.213 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.213 19:16:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:03.213 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.213 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.213 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.213 19:16:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:23:03.213 19:16:19 -- rpc/rpc.sh@26 -- # jq length 00:23:03.472 19:16:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:23:03.472 ************************************ 00:23:03.472 END TEST rpc_integrity 00:23:03.472 
************************************ 00:23:03.472 00:23:03.472 real 0m0.343s 00:23:03.472 user 0m0.177s 00:23:03.472 sys 0m0.044s 00:23:03.472 19:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:03.472 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.472 19:16:19 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:23:03.472 19:16:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:03.472 19:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:03.472 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.472 ************************************ 00:23:03.472 START TEST rpc_plugins 00:23:03.472 ************************************ 00:23:03.472 19:16:19 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:23:03.472 19:16:19 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:23:03.472 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.472 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.472 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.473 19:16:19 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:23:03.473 19:16:19 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:23:03.473 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.473 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.473 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.473 19:16:19 -- rpc/rpc.sh@31 -- # bdevs='[ 00:23:03.473 { 00:23:03.473 "name": "Malloc1", 00:23:03.473 "aliases": [ 00:23:03.473 "ce9dbe8b-1fd9-46e8-af6c-db67183efdc8" 00:23:03.473 ], 00:23:03.473 "product_name": "Malloc disk", 00:23:03.473 "block_size": 4096, 00:23:03.473 "num_blocks": 256, 00:23:03.473 "uuid": "ce9dbe8b-1fd9-46e8-af6c-db67183efdc8", 00:23:03.473 "assigned_rate_limits": { 00:23:03.473 "rw_ios_per_sec": 0, 00:23:03.473 "rw_mbytes_per_sec": 0, 00:23:03.473 "r_mbytes_per_sec": 0, 00:23:03.473 "w_mbytes_per_sec": 0 00:23:03.473 }, 00:23:03.473 "claimed": false, 00:23:03.473 "zoned": false, 00:23:03.473 "supported_io_types": { 00:23:03.473 "read": true, 00:23:03.473 "write": true, 00:23:03.473 "unmap": true, 00:23:03.473 "write_zeroes": true, 00:23:03.473 "flush": true, 00:23:03.473 "reset": true, 00:23:03.473 "compare": false, 00:23:03.473 "compare_and_write": false, 00:23:03.473 "abort": true, 00:23:03.473 "nvme_admin": false, 00:23:03.473 "nvme_io": false 00:23:03.473 }, 00:23:03.473 "memory_domains": [ 00:23:03.473 { 00:23:03.473 "dma_device_id": "system", 00:23:03.473 "dma_device_type": 1 00:23:03.473 }, 00:23:03.473 { 00:23:03.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.473 "dma_device_type": 2 00:23:03.473 } 00:23:03.473 ], 00:23:03.473 "driver_specific": {} 00:23:03.473 } 00:23:03.473 ]' 00:23:03.473 19:16:19 -- rpc/rpc.sh@32 -- # jq length 00:23:03.473 19:16:19 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:23:03.473 19:16:19 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:23:03.473 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.473 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.473 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.473 19:16:19 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:23:03.473 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.473 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.473 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.473 19:16:19 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:23:03.473 19:16:19 -- 
rpc/rpc.sh@36 -- # jq length 00:23:03.730 19:16:19 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:23:03.730 ************************************ 00:23:03.731 END TEST rpc_plugins 00:23:03.731 ************************************ 00:23:03.731 00:23:03.731 real 0m0.190s 00:23:03.731 user 0m0.136s 00:23:03.731 sys 0m0.009s 00:23:03.731 19:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:03.731 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.731 19:16:19 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:23:03.731 19:16:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:03.731 19:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:03.731 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.731 ************************************ 00:23:03.731 START TEST rpc_trace_cmd_test 00:23:03.731 ************************************ 00:23:03.731 19:16:19 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:23:03.731 19:16:19 -- rpc/rpc.sh@40 -- # local info 00:23:03.731 19:16:19 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:23:03.731 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.731 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.731 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.731 19:16:19 -- rpc/rpc.sh@42 -- # info='{ 00:23:03.731 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110236", 00:23:03.731 "tpoint_group_mask": "0x8", 00:23:03.731 "iscsi_conn": { 00:23:03.731 "mask": "0x2", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "scsi": { 00:23:03.731 "mask": "0x4", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "bdev": { 00:23:03.731 "mask": "0x8", 00:23:03.731 "tpoint_mask": "0xffffffffffffffff" 00:23:03.731 }, 00:23:03.731 "nvmf_rdma": { 00:23:03.731 "mask": "0x10", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "nvmf_tcp": { 00:23:03.731 "mask": "0x20", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "ftl": { 00:23:03.731 "mask": "0x40", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "blobfs": { 00:23:03.731 "mask": "0x80", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "dsa": { 00:23:03.731 "mask": "0x200", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "thread": { 00:23:03.731 "mask": "0x400", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "nvme_pcie": { 00:23:03.731 "mask": "0x800", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "iaa": { 00:23:03.731 "mask": "0x1000", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "nvme_tcp": { 00:23:03.731 "mask": "0x2000", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "bdev_nvme": { 00:23:03.731 "mask": "0x4000", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 }, 00:23:03.731 "sock": { 00:23:03.731 "mask": "0x8000", 00:23:03.731 "tpoint_mask": "0x0" 00:23:03.731 } 00:23:03.731 }' 00:23:03.731 19:16:19 -- rpc/rpc.sh@43 -- # jq length 00:23:03.731 19:16:19 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:23:03.731 19:16:19 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:23:03.989 19:16:19 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:23:03.989 19:16:19 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:23:03.989 19:16:19 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:23:03.989 19:16:19 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:23:03.989 19:16:19 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:23:03.989 19:16:19 -- rpc/rpc.sh@47 -- # jq -r 
.bdev.tpoint_mask 00:23:03.989 19:16:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:23:03.989 00:23:03.989 real 0m0.281s 00:23:03.989 user 0m0.239s 00:23:03.989 sys 0m0.034s 00:23:03.989 19:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:03.989 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.989 ************************************ 00:23:03.989 END TEST rpc_trace_cmd_test 00:23:03.989 ************************************ 00:23:03.989 19:16:19 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:23:03.989 19:16:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:23:03.989 19:16:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:23:03.989 19:16:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:03.989 19:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:03.989 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.989 ************************************ 00:23:03.989 START TEST rpc_daemon_integrity 00:23:03.989 ************************************ 00:23:03.989 19:16:19 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:23:03.989 19:16:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:03.989 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.989 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:03.989 19:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.989 19:16:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:23:03.989 19:16:19 -- rpc/rpc.sh@13 -- # jq length 00:23:04.247 19:16:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:23:04.247 19:16:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:23:04.247 19:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.247 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.247 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.247 19:16:20 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:23:04.247 19:16:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:23:04.247 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.247 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.247 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.247 19:16:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:23:04.247 { 00:23:04.247 "name": "Malloc2", 00:23:04.247 "aliases": [ 00:23:04.247 "7c10f8f9-cb7b-49e5-ad15-9bc7ecb7219f" 00:23:04.247 ], 00:23:04.247 "product_name": "Malloc disk", 00:23:04.247 "block_size": 512, 00:23:04.247 "num_blocks": 16384, 00:23:04.247 "uuid": "7c10f8f9-cb7b-49e5-ad15-9bc7ecb7219f", 00:23:04.247 "assigned_rate_limits": { 00:23:04.247 "rw_ios_per_sec": 0, 00:23:04.247 "rw_mbytes_per_sec": 0, 00:23:04.247 "r_mbytes_per_sec": 0, 00:23:04.247 "w_mbytes_per_sec": 0 00:23:04.247 }, 00:23:04.247 "claimed": false, 00:23:04.247 "zoned": false, 00:23:04.247 "supported_io_types": { 00:23:04.247 "read": true, 00:23:04.247 "write": true, 00:23:04.247 "unmap": true, 00:23:04.247 "write_zeroes": true, 00:23:04.247 "flush": true, 00:23:04.247 "reset": true, 00:23:04.247 "compare": false, 00:23:04.247 "compare_and_write": false, 00:23:04.247 "abort": true, 00:23:04.247 "nvme_admin": false, 00:23:04.248 "nvme_io": false 00:23:04.248 }, 00:23:04.248 "memory_domains": [ 00:23:04.248 { 00:23:04.248 "dma_device_id": "system", 00:23:04.248 "dma_device_type": 1 00:23:04.248 }, 00:23:04.248 { 00:23:04.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.248 "dma_device_type": 2 00:23:04.248 } 00:23:04.248 ], 00:23:04.248 "driver_specific": {} 00:23:04.248 } 00:23:04.248 ]' 
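The create/claim/inspect/teardown cycle that rpc_integrity and rpc_daemon_integrity exercise can be replayed by hand with the same RPC method names. A sketch, assuming scripts/rpc.py and a freshly started target so that the first malloc bdev comes back as Malloc0:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                       # 8 MB malloc bdev with 512-byte blocks (16384 blocks)
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # claim it behind a passthru bdev
    $rpc bdev_get_bdevs | jq length                     # expect 2: Malloc0 plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                     # expect 0 again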
00:23:04.248 19:16:20 -- rpc/rpc.sh@17 -- # jq length 00:23:04.248 19:16:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:23:04.248 19:16:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:23:04.248 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.248 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.248 [2024-04-18 19:16:20.094157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:23:04.248 [2024-04-18 19:16:20.094385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.248 [2024-04-18 19:16:20.094457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:04.248 [2024-04-18 19:16:20.094558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.248 [2024-04-18 19:16:20.097360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.248 [2024-04-18 19:16:20.097532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:23:04.248 Passthru0 00:23:04.248 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.248 19:16:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:23:04.248 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.248 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.248 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.248 19:16:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:23:04.248 { 00:23:04.248 "name": "Malloc2", 00:23:04.248 "aliases": [ 00:23:04.248 "7c10f8f9-cb7b-49e5-ad15-9bc7ecb7219f" 00:23:04.248 ], 00:23:04.248 "product_name": "Malloc disk", 00:23:04.248 "block_size": 512, 00:23:04.248 "num_blocks": 16384, 00:23:04.248 "uuid": "7c10f8f9-cb7b-49e5-ad15-9bc7ecb7219f", 00:23:04.248 "assigned_rate_limits": { 00:23:04.248 "rw_ios_per_sec": 0, 00:23:04.248 "rw_mbytes_per_sec": 0, 00:23:04.248 "r_mbytes_per_sec": 0, 00:23:04.248 "w_mbytes_per_sec": 0 00:23:04.248 }, 00:23:04.248 "claimed": true, 00:23:04.248 "claim_type": "exclusive_write", 00:23:04.248 "zoned": false, 00:23:04.248 "supported_io_types": { 00:23:04.248 "read": true, 00:23:04.248 "write": true, 00:23:04.248 "unmap": true, 00:23:04.248 "write_zeroes": true, 00:23:04.248 "flush": true, 00:23:04.248 "reset": true, 00:23:04.248 "compare": false, 00:23:04.248 "compare_and_write": false, 00:23:04.248 "abort": true, 00:23:04.248 "nvme_admin": false, 00:23:04.248 "nvme_io": false 00:23:04.248 }, 00:23:04.248 "memory_domains": [ 00:23:04.248 { 00:23:04.248 "dma_device_id": "system", 00:23:04.248 "dma_device_type": 1 00:23:04.248 }, 00:23:04.248 { 00:23:04.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.248 "dma_device_type": 2 00:23:04.248 } 00:23:04.248 ], 00:23:04.248 "driver_specific": {} 00:23:04.248 }, 00:23:04.248 { 00:23:04.248 "name": "Passthru0", 00:23:04.248 "aliases": [ 00:23:04.248 "dc8781b5-35fb-5112-9f81-a5cbff06c16d" 00:23:04.248 ], 00:23:04.248 "product_name": "passthru", 00:23:04.248 "block_size": 512, 00:23:04.248 "num_blocks": 16384, 00:23:04.248 "uuid": "dc8781b5-35fb-5112-9f81-a5cbff06c16d", 00:23:04.248 "assigned_rate_limits": { 00:23:04.248 "rw_ios_per_sec": 0, 00:23:04.248 "rw_mbytes_per_sec": 0, 00:23:04.248 "r_mbytes_per_sec": 0, 00:23:04.248 "w_mbytes_per_sec": 0 00:23:04.248 }, 00:23:04.248 "claimed": false, 00:23:04.248 "zoned": false, 00:23:04.248 "supported_io_types": { 00:23:04.248 "read": true, 00:23:04.248 "write": true, 00:23:04.248 "unmap": true, 00:23:04.248 
"write_zeroes": true, 00:23:04.248 "flush": true, 00:23:04.248 "reset": true, 00:23:04.248 "compare": false, 00:23:04.248 "compare_and_write": false, 00:23:04.248 "abort": true, 00:23:04.248 "nvme_admin": false, 00:23:04.248 "nvme_io": false 00:23:04.248 }, 00:23:04.248 "memory_domains": [ 00:23:04.248 { 00:23:04.248 "dma_device_id": "system", 00:23:04.248 "dma_device_type": 1 00:23:04.248 }, 00:23:04.248 { 00:23:04.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.248 "dma_device_type": 2 00:23:04.248 } 00:23:04.248 ], 00:23:04.248 "driver_specific": { 00:23:04.248 "passthru": { 00:23:04.248 "name": "Passthru0", 00:23:04.248 "base_bdev_name": "Malloc2" 00:23:04.248 } 00:23:04.248 } 00:23:04.248 } 00:23:04.248 ]' 00:23:04.248 19:16:20 -- rpc/rpc.sh@21 -- # jq length 00:23:04.248 19:16:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:23:04.248 19:16:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:23:04.248 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.248 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.506 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.506 19:16:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:23:04.507 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.507 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.507 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.507 19:16:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:04.507 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.507 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.507 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.507 19:16:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:23:04.507 19:16:20 -- rpc/rpc.sh@26 -- # jq length 00:23:04.507 19:16:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:23:04.507 00:23:04.507 real 0m0.380s 00:23:04.507 user 0m0.232s 00:23:04.507 sys 0m0.045s 00:23:04.507 19:16:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:04.507 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:04.507 ************************************ 00:23:04.507 END TEST rpc_daemon_integrity 00:23:04.507 ************************************ 00:23:04.507 19:16:20 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:04.507 19:16:20 -- rpc/rpc.sh@84 -- # killprocess 110236 00:23:04.507 19:16:20 -- common/autotest_common.sh@936 -- # '[' -z 110236 ']' 00:23:04.507 19:16:20 -- common/autotest_common.sh@940 -- # kill -0 110236 00:23:04.507 19:16:20 -- common/autotest_common.sh@941 -- # uname 00:23:04.507 19:16:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.507 19:16:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110236 00:23:04.507 killing process with pid 110236 00:23:04.507 19:16:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:04.507 19:16:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:04.507 19:16:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110236' 00:23:04.507 19:16:20 -- common/autotest_common.sh@955 -- # kill 110236 00:23:04.507 19:16:20 -- common/autotest_common.sh@960 -- # wait 110236 00:23:07.788 ************************************ 00:23:07.788 END TEST rpc 00:23:07.788 ************************************ 00:23:07.788 00:23:07.788 real 0m5.918s 00:23:07.788 user 0m6.835s 00:23:07.788 sys 0m0.836s 00:23:07.788 19:16:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:07.788 
19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.788 19:16:23 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:23:07.788 19:16:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:07.788 19:16:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:07.788 19:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:07.788 ************************************ 00:23:07.788 START TEST skip_rpc 00:23:07.788 ************************************ 00:23:07.788 19:16:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:23:07.788 * Looking for test storage... 00:23:07.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:23:07.788 19:16:23 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:07.788 19:16:23 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:07.788 19:16:23 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:23:07.788 19:16:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:07.788 19:16:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:07.788 19:16:23 -- common/autotest_common.sh@10 -- # set +x 00:23:07.788 ************************************ 00:23:07.788 START TEST skip_rpc 00:23:07.788 ************************************ 00:23:07.788 19:16:23 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:23:07.788 19:16:23 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=110544 00:23:07.788 19:16:23 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:23:07.789 19:16:23 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:07.789 19:16:23 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:23:07.789 [2024-04-18 19:16:23.338065] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
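skip_rpc starts the target with --no-rpc-server, so the point of the test is that RPC calls must fail while the application itself keeps running; the harness expresses that below as 'NOT rpc_cmd spdk_get_version'. A hand-run sketch of the same expectation, with scripts/rpc.py assumed:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    # With no RPC server listening, this call is expected to error out.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version \
        || echo 'RPC rejected, as the test expects'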
00:23:07.789 [2024-04-18 19:16:23.338715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110544 ] 00:23:07.789 [2024-04-18 19:16:23.513333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.047 [2024-04-18 19:16:23.729425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.313 19:16:28 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:23:13.314 19:16:28 -- common/autotest_common.sh@638 -- # local es=0 00:23:13.314 19:16:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:23:13.314 19:16:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:13.314 19:16:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.314 19:16:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:13.314 19:16:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.314 19:16:28 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:23:13.314 19:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.314 19:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 19:16:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:13.314 19:16:28 -- common/autotest_common.sh@641 -- # es=1 00:23:13.314 19:16:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:13.314 19:16:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:13.314 19:16:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:13.314 19:16:28 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:23:13.314 19:16:28 -- rpc/skip_rpc.sh@23 -- # killprocess 110544 00:23:13.314 19:16:28 -- common/autotest_common.sh@936 -- # '[' -z 110544 ']' 00:23:13.314 19:16:28 -- common/autotest_common.sh@940 -- # kill -0 110544 00:23:13.314 19:16:28 -- common/autotest_common.sh@941 -- # uname 00:23:13.314 19:16:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.314 19:16:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110544 00:23:13.314 19:16:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:13.314 19:16:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:13.314 19:16:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110544' 00:23:13.314 killing process with pid 110544 00:23:13.314 19:16:28 -- common/autotest_common.sh@955 -- # kill 110544 00:23:13.314 19:16:28 -- common/autotest_common.sh@960 -- # wait 110544 00:23:15.217 ************************************ 00:23:15.217 END TEST skip_rpc 00:23:15.217 ************************************ 00:23:15.217 00:23:15.217 real 0m7.675s 00:23:15.217 user 0m7.231s 00:23:15.217 sys 0m0.357s 00:23:15.217 19:16:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:15.217 19:16:30 -- common/autotest_common.sh@10 -- # set +x 00:23:15.217 19:16:30 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:23:15.217 19:16:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:15.217 19:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:15.217 19:16:30 -- common/autotest_common.sh@10 -- # set +x 00:23:15.217 ************************************ 00:23:15.217 START TEST skip_rpc_with_json 00:23:15.217 ************************************ 00:23:15.217 19:16:30 -- common/autotest_common.sh@1111 -- # 
test_skip_rpc_with_json 00:23:15.217 19:16:30 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:23:15.217 19:16:30 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=110686 00:23:15.217 19:16:30 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:15.217 19:16:30 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:15.217 19:16:30 -- rpc/skip_rpc.sh@31 -- # waitforlisten 110686 00:23:15.217 19:16:30 -- common/autotest_common.sh@817 -- # '[' -z 110686 ']' 00:23:15.217 19:16:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.217 19:16:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:15.217 19:16:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.217 19:16:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:15.217 19:16:30 -- common/autotest_common.sh@10 -- # set +x 00:23:15.217 [2024-04-18 19:16:31.069341] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:15.217 [2024-04-18 19:16:31.069658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110686 ] 00:23:15.476 [2024-04-18 19:16:31.230590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.734 [2024-04-18 19:16:31.460495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.671 19:16:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:16.671 19:16:32 -- common/autotest_common.sh@850 -- # return 0 00:23:16.671 19:16:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:23:16.671 19:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.671 19:16:32 -- common/autotest_common.sh@10 -- # set +x 00:23:16.671 [2024-04-18 19:16:32.444879] nvmf_rpc.c:2534:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:23:16.671 request: 00:23:16.671 { 00:23:16.671 "trtype": "tcp", 00:23:16.671 "method": "nvmf_get_transports", 00:23:16.671 "req_id": 1 00:23:16.671 } 00:23:16.671 Got JSON-RPC error response 00:23:16.671 response: 00:23:16.671 { 00:23:16.671 "code": -19, 00:23:16.671 "message": "No such device" 00:23:16.671 } 00:23:16.671 19:16:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:16.671 19:16:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:23:16.671 19:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.671 19:16:32 -- common/autotest_common.sh@10 -- # set +x 00:23:16.671 [2024-04-18 19:16:32.452983] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.671 19:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.671 19:16:32 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:23:16.671 19:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.671 19:16:32 -- common/autotest_common.sh@10 -- # set +x 00:23:16.671 19:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.671 19:16:32 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:16.671 { 00:23:16.671 "subsystems": [ 00:23:16.671 { 00:23:16.672 "subsystem": "scheduler", 00:23:16.672 "config": [ 00:23:16.672 { 00:23:16.672 "method": 
"framework_set_scheduler", 00:23:16.672 "params": { 00:23:16.672 "name": "static" 00:23:16.672 } 00:23:16.672 } 00:23:16.672 ] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "vmd", 00:23:16.672 "config": [] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "sock", 00:23:16.672 "config": [ 00:23:16.672 { 00:23:16.672 "method": "sock_impl_set_options", 00:23:16.672 "params": { 00:23:16.672 "impl_name": "posix", 00:23:16.672 "recv_buf_size": 2097152, 00:23:16.672 "send_buf_size": 2097152, 00:23:16.672 "enable_recv_pipe": true, 00:23:16.672 "enable_quickack": false, 00:23:16.672 "enable_placement_id": 0, 00:23:16.672 "enable_zerocopy_send_server": true, 00:23:16.672 "enable_zerocopy_send_client": false, 00:23:16.672 "zerocopy_threshold": 0, 00:23:16.672 "tls_version": 0, 00:23:16.672 "enable_ktls": false 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "sock_impl_set_options", 00:23:16.672 "params": { 00:23:16.672 "impl_name": "ssl", 00:23:16.672 "recv_buf_size": 4096, 00:23:16.672 "send_buf_size": 4096, 00:23:16.672 "enable_recv_pipe": true, 00:23:16.672 "enable_quickack": false, 00:23:16.672 "enable_placement_id": 0, 00:23:16.672 "enable_zerocopy_send_server": true, 00:23:16.672 "enable_zerocopy_send_client": false, 00:23:16.672 "zerocopy_threshold": 0, 00:23:16.672 "tls_version": 0, 00:23:16.672 "enable_ktls": false 00:23:16.672 } 00:23:16.672 } 00:23:16.672 ] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "iobuf", 00:23:16.672 "config": [ 00:23:16.672 { 00:23:16.672 "method": "iobuf_set_options", 00:23:16.672 "params": { 00:23:16.672 "small_pool_count": 8192, 00:23:16.672 "large_pool_count": 1024, 00:23:16.672 "small_bufsize": 8192, 00:23:16.672 "large_bufsize": 135168 00:23:16.672 } 00:23:16.672 } 00:23:16.672 ] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "keyring", 00:23:16.672 "config": [] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "accel", 00:23:16.672 "config": [ 00:23:16.672 { 00:23:16.672 "method": "accel_set_options", 00:23:16.672 "params": { 00:23:16.672 "small_cache_size": 128, 00:23:16.672 "large_cache_size": 16, 00:23:16.672 "task_count": 2048, 00:23:16.672 "sequence_count": 2048, 00:23:16.672 "buf_count": 2048 00:23:16.672 } 00:23:16.672 } 00:23:16.672 ] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "bdev", 00:23:16.672 "config": [ 00:23:16.672 { 00:23:16.672 "method": "bdev_set_options", 00:23:16.672 "params": { 00:23:16.672 "bdev_io_pool_size": 65535, 00:23:16.672 "bdev_io_cache_size": 256, 00:23:16.672 "bdev_auto_examine": true, 00:23:16.672 "iobuf_small_cache_size": 128, 00:23:16.672 "iobuf_large_cache_size": 16 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "bdev_raid_set_options", 00:23:16.672 "params": { 00:23:16.672 "process_window_size_kb": 1024 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "bdev_nvme_set_options", 00:23:16.672 "params": { 00:23:16.672 "action_on_timeout": "none", 00:23:16.672 "timeout_us": 0, 00:23:16.672 "timeout_admin_us": 0, 00:23:16.672 "keep_alive_timeout_ms": 10000, 00:23:16.672 "arbitration_burst": 0, 00:23:16.672 "low_priority_weight": 0, 00:23:16.672 "medium_priority_weight": 0, 00:23:16.672 "high_priority_weight": 0, 00:23:16.672 "nvme_adminq_poll_period_us": 10000, 00:23:16.672 "nvme_ioq_poll_period_us": 0, 00:23:16.672 "io_queue_requests": 0, 00:23:16.672 "delay_cmd_submit": true, 00:23:16.672 "transport_retry_count": 4, 00:23:16.672 "bdev_retry_count": 3, 00:23:16.672 "transport_ack_timeout": 0, 00:23:16.672 
"ctrlr_loss_timeout_sec": 0, 00:23:16.672 "reconnect_delay_sec": 0, 00:23:16.672 "fast_io_fail_timeout_sec": 0, 00:23:16.672 "disable_auto_failback": false, 00:23:16.672 "generate_uuids": false, 00:23:16.672 "transport_tos": 0, 00:23:16.672 "nvme_error_stat": false, 00:23:16.672 "rdma_srq_size": 0, 00:23:16.672 "io_path_stat": false, 00:23:16.672 "allow_accel_sequence": false, 00:23:16.672 "rdma_max_cq_size": 0, 00:23:16.672 "rdma_cm_event_timeout_ms": 0, 00:23:16.672 "dhchap_digests": [ 00:23:16.672 "sha256", 00:23:16.672 "sha384", 00:23:16.672 "sha512" 00:23:16.672 ], 00:23:16.672 "dhchap_dhgroups": [ 00:23:16.672 "null", 00:23:16.672 "ffdhe2048", 00:23:16.672 "ffdhe3072", 00:23:16.672 "ffdhe4096", 00:23:16.672 "ffdhe6144", 00:23:16.672 "ffdhe8192" 00:23:16.672 ] 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "bdev_nvme_set_hotplug", 00:23:16.672 "params": { 00:23:16.672 "period_us": 100000, 00:23:16.672 "enable": false 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "bdev_iscsi_set_options", 00:23:16.672 "params": { 00:23:16.672 "timeout_sec": 30 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "bdev_wait_for_examine" 00:23:16.672 } 00:23:16.672 ] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "nvmf", 00:23:16.672 "config": [ 00:23:16.672 { 00:23:16.672 "method": "nvmf_set_config", 00:23:16.672 "params": { 00:23:16.672 "discovery_filter": "match_any", 00:23:16.672 "admin_cmd_passthru": { 00:23:16.672 "identify_ctrlr": false 00:23:16.672 } 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "nvmf_set_max_subsystems", 00:23:16.672 "params": { 00:23:16.672 "max_subsystems": 1024 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "nvmf_set_crdt", 00:23:16.672 "params": { 00:23:16.672 "crdt1": 0, 00:23:16.672 "crdt2": 0, 00:23:16.672 "crdt3": 0 00:23:16.672 } 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "method": "nvmf_create_transport", 00:23:16.672 "params": { 00:23:16.672 "trtype": "TCP", 00:23:16.672 "max_queue_depth": 128, 00:23:16.672 "max_io_qpairs_per_ctrlr": 127, 00:23:16.672 "in_capsule_data_size": 4096, 00:23:16.672 "max_io_size": 131072, 00:23:16.672 "io_unit_size": 131072, 00:23:16.672 "max_aq_depth": 128, 00:23:16.672 "num_shared_buffers": 511, 00:23:16.672 "buf_cache_size": 4294967295, 00:23:16.672 "dif_insert_or_strip": false, 00:23:16.672 "zcopy": false, 00:23:16.672 "c2h_success": true, 00:23:16.672 "sock_priority": 0, 00:23:16.672 "abort_timeout_sec": 1, 00:23:16.672 "ack_timeout": 0 00:23:16.672 } 00:23:16.672 } 00:23:16.672 ] 00:23:16.672 }, 00:23:16.672 { 00:23:16.672 "subsystem": "nbd", 00:23:16.672 "config": [] 00:23:16.673 }, 00:23:16.673 { 00:23:16.673 "subsystem": "vhost_blk", 00:23:16.673 "config": [] 00:23:16.673 }, 00:23:16.673 { 00:23:16.673 "subsystem": "scsi", 00:23:16.673 "config": null 00:23:16.673 }, 00:23:16.673 { 00:23:16.673 "subsystem": "iscsi", 00:23:16.673 "config": [ 00:23:16.673 { 00:23:16.673 "method": "iscsi_set_options", 00:23:16.673 "params": { 00:23:16.673 "node_base": "iqn.2016-06.io.spdk", 00:23:16.673 "max_sessions": 128, 00:23:16.673 "max_connections_per_session": 2, 00:23:16.673 "max_queue_depth": 64, 00:23:16.673 "default_time2wait": 2, 00:23:16.673 "default_time2retain": 20, 00:23:16.673 "first_burst_length": 8192, 00:23:16.673 "immediate_data": true, 00:23:16.673 "allow_duplicated_isid": false, 00:23:16.673 "error_recovery_level": 0, 00:23:16.673 "nop_timeout": 60, 00:23:16.673 "nop_in_interval": 30, 00:23:16.673 "disable_chap": 
false, 00:23:16.673 "require_chap": false, 00:23:16.673 "mutual_chap": false, 00:23:16.673 "chap_group": 0, 00:23:16.673 "max_large_datain_per_connection": 64, 00:23:16.673 "max_r2t_per_connection": 4, 00:23:16.673 "pdu_pool_size": 36864, 00:23:16.673 "immediate_data_pool_size": 16384, 00:23:16.673 "data_out_pool_size": 2048 00:23:16.673 } 00:23:16.673 } 00:23:16.673 ] 00:23:16.673 }, 00:23:16.673 { 00:23:16.673 "subsystem": "vhost_scsi", 00:23:16.673 "config": [] 00:23:16.673 } 00:23:16.673 ] 00:23:16.673 } 00:23:16.673 19:16:32 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:16.673 19:16:32 -- rpc/skip_rpc.sh@40 -- # killprocess 110686 00:23:16.673 19:16:32 -- common/autotest_common.sh@936 -- # '[' -z 110686 ']' 00:23:16.673 19:16:32 -- common/autotest_common.sh@940 -- # kill -0 110686 00:23:16.673 19:16:32 -- common/autotest_common.sh@941 -- # uname 00:23:16.673 19:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:16.673 19:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110686 00:23:16.673 19:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:16.673 19:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:16.673 19:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110686' 00:23:16.673 killing process with pid 110686 00:23:16.673 19:16:32 -- common/autotest_common.sh@955 -- # kill 110686 00:23:16.673 19:16:32 -- common/autotest_common.sh@960 -- # wait 110686 00:23:19.955 19:16:35 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=110750 00:23:19.956 19:16:35 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:19.956 19:16:35 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:23:25.223 19:16:40 -- rpc/skip_rpc.sh@50 -- # killprocess 110750 00:23:25.223 19:16:40 -- common/autotest_common.sh@936 -- # '[' -z 110750 ']' 00:23:25.223 19:16:40 -- common/autotest_common.sh@940 -- # kill -0 110750 00:23:25.223 19:16:40 -- common/autotest_common.sh@941 -- # uname 00:23:25.223 19:16:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:25.223 19:16:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110750 00:23:25.223 killing process with pid 110750 00:23:25.223 19:16:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:25.223 19:16:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:25.223 19:16:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110750' 00:23:25.223 19:16:40 -- common/autotest_common.sh@955 -- # kill 110750 00:23:25.223 19:16:40 -- common/autotest_common.sh@960 -- # wait 110750 00:23:27.123 19:16:42 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:27.123 19:16:42 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:27.123 ************************************ 00:23:27.123 END TEST skip_rpc_with_json 00:23:27.123 ************************************ 00:23:27.123 00:23:27.123 real 0m11.858s 00:23:27.123 user 0m11.434s 00:23:27.123 sys 0m0.835s 00:23:27.123 19:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.123 19:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:27.123 19:16:42 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:23:27.123 19:16:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:27.123 19:16:42 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.123 19:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:27.123 ************************************ 00:23:27.123 START TEST skip_rpc_with_delay 00:23:27.123 ************************************ 00:23:27.123 19:16:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:23:27.123 19:16:42 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:27.123 19:16:42 -- common/autotest_common.sh@638 -- # local es=0 00:23:27.123 19:16:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:27.123 19:16:42 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.123 19:16:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:27.123 19:16:42 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.123 19:16:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:27.123 19:16:42 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.123 19:16:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:27.123 19:16:42 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.123 19:16:42 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:23:27.123 19:16:42 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:27.123 [2024-04-18 19:16:43.045412] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
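The error above is the expected outcome of test_skip_rpc_with_delay: spdk_tgt refuses the combination of --no-rpc-server and --wait-for-rpc, since waiting for an RPC that can never arrive would hang forever. A minimal sketch of that check, assuming SPDK_BIN points at the built spdk_tgt binary (this is an illustration, not the exact test script):

# Expect spdk_tgt to reject --wait-for-rpc when the RPC server is disabled.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected success: --wait-for-rpc should fail without an RPC server" >&2
    exit 1
fi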
00:23:27.123 [2024-04-18 19:16:43.045876] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:23:27.381 19:16:43 -- common/autotest_common.sh@641 -- # es=1 00:23:27.381 19:16:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:27.381 19:16:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:27.381 19:16:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:27.381 00:23:27.381 real 0m0.158s 00:23:27.381 user 0m0.076s 00:23:27.381 sys 0m0.080s 00:23:27.381 19:16:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.381 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:27.381 ************************************ 00:23:27.381 END TEST skip_rpc_with_delay 00:23:27.381 ************************************ 00:23:27.381 19:16:43 -- rpc/skip_rpc.sh@77 -- # uname 00:23:27.381 19:16:43 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:23:27.382 19:16:43 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:23:27.382 19:16:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:27.382 19:16:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.382 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:27.382 ************************************ 00:23:27.382 START TEST exit_on_failed_rpc_init 00:23:27.382 ************************************ 00:23:27.382 19:16:43 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:23:27.382 19:16:43 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=110926 00:23:27.382 19:16:43 -- rpc/skip_rpc.sh@63 -- # waitforlisten 110926 00:23:27.382 19:16:43 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:27.382 19:16:43 -- common/autotest_common.sh@817 -- # '[' -z 110926 ']' 00:23:27.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.382 19:16:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.382 19:16:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:27.382 19:16:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.382 19:16:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:27.382 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:27.382 [2024-04-18 19:16:43.304523] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:23:27.382 [2024-04-18 19:16:43.304916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110926 ] 00:23:27.640 [2024-04-18 19:16:43.487435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.898 [2024-04-18 19:16:43.796186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.287 19:16:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:29.287 19:16:44 -- common/autotest_common.sh@850 -- # return 0 00:23:29.287 19:16:44 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:29.287 19:16:44 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:29.287 19:16:44 -- common/autotest_common.sh@638 -- # local es=0 00:23:29.287 19:16:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:29.287 19:16:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.287 19:16:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:29.287 19:16:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.287 19:16:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:29.287 19:16:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.287 19:16:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:29.287 19:16:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.287 19:16:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:23:29.287 19:16:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:29.287 [2024-04-18 19:16:44.917849] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:29.287 [2024-04-18 19:16:44.918834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110961 ] 00:23:29.287 [2024-04-18 19:16:45.089428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.546 [2024-04-18 19:16:45.388019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.546 [2024-04-18 19:16:45.388355] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
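The "socket path /var/tmp/spdk.sock in use" error above is exactly what exit_on_failed_rpc_init looks for: a second spdk_tgt bound to the same default RPC socket must fail to initialize and exit non-zero. A rough sketch of the scenario, assuming the same binary and default socket path (the real test waits via its waitforlisten/NOT helpers rather than a plain sleep):

# First target claims /var/tmp/spdk.sock; the second must fail to start.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &
first_pid=$!
sleep 1                      # crude wait for the RPC socket to appear
if "$SPDK_BIN" -m 0x2; then
    echo "unexpected success: second target started on a busy RPC socket" >&2
fi
kill "$first_pid"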
00:23:29.546 [2024-04-18 19:16:45.388471] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:29.546 [2024-04-18 19:16:45.388572] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:30.112 19:16:45 -- common/autotest_common.sh@641 -- # es=234 00:23:30.112 19:16:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:30.112 19:16:45 -- common/autotest_common.sh@650 -- # es=106 00:23:30.112 19:16:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:23:30.112 19:16:45 -- common/autotest_common.sh@658 -- # es=1 00:23:30.112 19:16:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:30.112 19:16:45 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:30.112 19:16:45 -- rpc/skip_rpc.sh@70 -- # killprocess 110926 00:23:30.112 19:16:45 -- common/autotest_common.sh@936 -- # '[' -z 110926 ']' 00:23:30.112 19:16:45 -- common/autotest_common.sh@940 -- # kill -0 110926 00:23:30.112 19:16:45 -- common/autotest_common.sh@941 -- # uname 00:23:30.112 19:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:30.112 19:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110926 00:23:30.112 19:16:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:30.112 19:16:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:30.112 19:16:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110926' 00:23:30.112 killing process with pid 110926 00:23:30.112 19:16:45 -- common/autotest_common.sh@955 -- # kill 110926 00:23:30.112 19:16:45 -- common/autotest_common.sh@960 -- # wait 110926 00:23:33.420 ************************************ 00:23:33.420 END TEST exit_on_failed_rpc_init 00:23:33.420 ************************************ 00:23:33.420 00:23:33.420 real 0m5.408s 00:23:33.420 user 0m6.272s 00:23:33.420 sys 0m0.585s 00:23:33.420 19:16:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.420 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:23:33.420 19:16:48 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:33.420 ************************************ 00:23:33.420 END TEST skip_rpc 00:23:33.420 ************************************ 00:23:33.420 00:23:33.420 real 0m25.597s 00:23:33.420 user 0m25.293s 00:23:33.420 sys 0m2.076s 00:23:33.420 19:16:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.420 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:23:33.420 19:16:48 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:23:33.420 19:16:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:33.420 19:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.420 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:23:33.420 ************************************ 00:23:33.420 START TEST rpc_client 00:23:33.420 ************************************ 00:23:33.420 19:16:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:23:33.420 * Looking for test storage... 
00:23:33.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:23:33.420 19:16:48 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:23:33.420 OK 00:23:33.420 19:16:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:23:33.420 ************************************ 00:23:33.420 END TEST rpc_client 00:23:33.420 ************************************ 00:23:33.420 00:23:33.420 real 0m0.178s 00:23:33.420 user 0m0.079s 00:23:33.420 sys 0m0.110s 00:23:33.420 19:16:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.420 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:23:33.420 19:16:48 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:23:33.420 19:16:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:33.420 19:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.420 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:23:33.420 ************************************ 00:23:33.420 START TEST json_config 00:23:33.420 ************************************ 00:23:33.420 19:16:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:23:33.420 19:16:49 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.420 19:16:49 -- nvmf/common.sh@7 -- # uname -s 00:23:33.420 19:16:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.420 19:16:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.420 19:16:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.420 19:16:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.420 19:16:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.420 19:16:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.420 19:16:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.420 19:16:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.420 19:16:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.420 19:16:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.420 19:16:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1402161d-a1b4-4e7c-9fdb-5188873295c0 00:23:33.420 19:16:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=1402161d-a1b4-4e7c-9fdb-5188873295c0 00:23:33.420 19:16:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.420 19:16:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.420 19:16:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:33.420 19:16:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.420 19:16:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.420 19:16:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.420 19:16:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.420 19:16:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.420 19:16:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:33.420 19:16:49 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:33.420 19:16:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:33.420 19:16:49 -- paths/export.sh@5 -- # export PATH 00:23:33.420 19:16:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:33.420 19:16:49 -- nvmf/common.sh@47 -- # : 0 00:23:33.420 19:16:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.420 19:16:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.420 19:16:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.420 19:16:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.420 19:16:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.420 19:16:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.420 19:16:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.420 19:16:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.420 19:16:49 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:23:33.420 19:16:49 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:23:33.420 19:16:49 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:23:33.420 19:16:49 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:23:33.420 19:16:49 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:23:33.420 19:16:49 -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:23:33.420 19:16:49 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:23:33.420 19:16:49 -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:23:33.420 19:16:49 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:23:33.421 19:16:49 -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:23:33.421 19:16:49 -- json_config/json_config.sh@33 -- # declare -A app_params 00:23:33.421 19:16:49 -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:23:33.421 19:16:49 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:23:33.421 19:16:49 -- json_config/json_config.sh@40 -- # last_event_id=0 00:23:33.421 19:16:49 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:23:33.421 INFO: JSON 
configuration test init 00:23:33.421 19:16:49 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:23:33.421 19:16:49 -- json_config/json_config.sh@357 -- # json_config_test_init 00:23:33.421 19:16:49 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:23:33.421 19:16:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:33.421 19:16:49 -- common/autotest_common.sh@10 -- # set +x 00:23:33.421 19:16:49 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:23:33.421 19:16:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:33.421 19:16:49 -- common/autotest_common.sh@10 -- # set +x 00:23:33.421 19:16:49 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:23:33.421 19:16:49 -- json_config/common.sh@9 -- # local app=target 00:23:33.421 19:16:49 -- json_config/common.sh@10 -- # shift 00:23:33.421 19:16:49 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:23:33.421 19:16:49 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:23:33.421 19:16:49 -- json_config/common.sh@15 -- # local app_extra_params= 00:23:33.421 19:16:49 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:33.421 19:16:49 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:33.421 19:16:49 -- json_config/common.sh@22 -- # app_pid["$app"]=111149 00:23:33.421 Waiting for target to run... 00:23:33.421 19:16:49 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:23:33.421 19:16:49 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:23:33.421 19:16:49 -- json_config/common.sh@25 -- # waitforlisten 111149 /var/tmp/spdk_tgt.sock 00:23:33.421 19:16:49 -- common/autotest_common.sh@817 -- # '[' -z 111149 ']' 00:23:33.421 19:16:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:33.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:33.421 19:16:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:33.421 19:16:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:33.421 19:16:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:33.421 19:16:49 -- common/autotest_common.sh@10 -- # set +x 00:23:33.421 [2024-04-18 19:16:49.171053] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:23:33.421 [2024-04-18 19:16:49.171209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111149 ] 00:23:33.680 [2024-04-18 19:16:49.570758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.937 [2024-04-18 19:16:49.855612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.195 00:23:34.195 19:16:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:34.195 19:16:50 -- common/autotest_common.sh@850 -- # return 0 00:23:34.195 19:16:50 -- json_config/common.sh@26 -- # echo '' 00:23:34.195 19:16:50 -- json_config/json_config.sh@269 -- # create_accel_config 00:23:34.195 19:16:50 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:23:34.195 19:16:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:34.195 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:23:34.195 19:16:50 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:23:34.195 19:16:50 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:23:34.195 19:16:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:34.195 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:23:34.452 19:16:50 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:23:34.452 19:16:50 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:23:34.452 19:16:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:23:35.385 19:16:51 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:23:35.385 19:16:51 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:23:35.643 19:16:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:35.643 19:16:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.643 19:16:51 -- json_config/json_config.sh@45 -- # local ret=0 00:23:35.643 19:16:51 -- json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:23:35.643 19:16:51 -- json_config/json_config.sh@46 -- # local enabled_types 00:23:35.643 19:16:51 -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:23:35.643 19:16:51 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:23:35.643 19:16:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:23:35.643 19:16:51 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:23:35.901 19:16:51 -- json_config/json_config.sh@48 -- # local get_types 00:23:35.901 19:16:51 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:23:35.901 19:16:51 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:23:35.901 19:16:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:35.901 19:16:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.901 19:16:51 -- json_config/json_config.sh@55 -- # return 0 00:23:35.901 19:16:51 -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:23:35.901 19:16:51 -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:23:35.901 19:16:51 -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:23:35.901 19:16:51 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:23:35.901 19:16:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.901 19:16:51 -- json_config/json_config.sh@107 -- # expected_notifications=() 00:23:35.901 19:16:51 -- json_config/json_config.sh@107 -- # local expected_notifications 00:23:35.901 19:16:51 -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:23:35.901 19:16:51 -- json_config/json_config.sh@111 -- # get_notifications 00:23:35.901 19:16:51 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:23:35.901 19:16:51 -- json_config/json_config.sh@61 -- # IFS=: 00:23:35.901 19:16:51 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:35.901 19:16:51 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:23:35.901 19:16:51 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:23:35.901 19:16:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:23:36.159 19:16:51 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:23:36.159 19:16:51 -- json_config/json_config.sh@61 -- # IFS=: 00:23:36.159 19:16:51 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:36.159 19:16:51 -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:23:36.159 19:16:51 -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:23:36.159 19:16:51 -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:23:36.159 19:16:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:23:36.159 Nvme0n1p0 Nvme0n1p1 00:23:36.159 19:16:52 -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:23:36.159 19:16:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:23:36.476 [2024-04-18 19:16:52.226590] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:23:36.476 [2024-04-18 19:16:52.226686] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:23:36.476 00:23:36.476 19:16:52 -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:23:36.476 19:16:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:23:36.733 Malloc3 00:23:36.733 19:16:52 -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:23:36.733 19:16:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:23:36.991 [2024-04-18 19:16:52.669248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:36.991 [2024-04-18 19:16:52.669367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.991 [2024-04-18 19:16:52.669408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:36.991 [2024-04-18 19:16:52.669445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.991 [2024-04-18 19:16:52.671943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.991 [2024-04-18 19:16:52.672008] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:23:36.991 PTBdevFromMalloc3 00:23:36.991 19:16:52 -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:23:36.991 19:16:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:23:36.991 Null0 00:23:36.991 19:16:52 -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:23:36.991 19:16:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:23:37.249 Malloc0 00:23:37.249 19:16:53 -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:23:37.249 19:16:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:23:37.507 Malloc1 00:23:37.507 19:16:53 -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:23:37.507 19:16:53 -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:23:38.074 102400+0 records in 00:23:38.074 102400+0 records out 00:23:38.074 104857600 bytes (105 MB, 100 MiB) copied, 0.480657 s, 218 MB/s 00:23:38.074 19:16:53 -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:23:38.074 19:16:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:23:38.333 aio_disk 00:23:38.333 19:16:54 -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:23:38.333 19:16:54 -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:23:38.333 19:16:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:23:38.592 70a78188-cf13-4b5e-a512-c94b3484befd 00:23:38.592 19:16:54 -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:23:38.592 19:16:54 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:23:38.592 19:16:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:23:39.157 19:16:54 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:23:39.157 19:16:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:23:39.157 19:16:55 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:23:39.157 19:16:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 
snapshot0 00:23:39.415 19:16:55 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:23:39.415 19:16:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:23:39.674 19:16:55 -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:23:39.674 19:16:55 -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:23:39.674 19:16:55 -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:97bdf7f2-0f81-4751-854d-17dd564b9bf0 bdev_register:0d95d5d8-8193-4f3d-918c-80319789f0b2 bdev_register:db21bf00-31ce-484d-9e39-5fff562c42aa bdev_register:7c7d6023-c1d2-4da9-bea6-12d91ccf87ee 00:23:39.674 19:16:55 -- json_config/json_config.sh@67 -- # local events_to_check 00:23:39.674 19:16:55 -- json_config/json_config.sh@68 -- # local recorded_events 00:23:39.674 19:16:55 -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:23:39.674 19:16:55 -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:97bdf7f2-0f81-4751-854d-17dd564b9bf0 bdev_register:0d95d5d8-8193-4f3d-918c-80319789f0b2 bdev_register:db21bf00-31ce-484d-9e39-5fff562c42aa bdev_register:7c7d6023-c1d2-4da9-bea6-12d91ccf87ee 00:23:39.674 19:16:55 -- json_config/json_config.sh@71 -- # sort 00:23:39.674 19:16:55 -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:23:39.932 19:16:55 -- json_config/json_config.sh@72 -- # get_notifications 00:23:39.932 19:16:55 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:23:39.932 19:16:55 -- json_config/json_config.sh@72 -- # sort 00:23:39.932 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:39.932 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:39.932 19:16:55 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:23:39.932 19:16:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:23:39.932 19:16:55 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- 
# echo bdev_register:Malloc3 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:97bdf7f2-0f81-4751-854d-17dd564b9bf0 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:0d95d5d8-8193-4f3d-918c-80319789f0b2 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:db21bf00-31ce-484d-9e39-5fff562c42aa 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@62 -- # echo bdev_register:7c7d6023-c1d2-4da9-bea6-12d91ccf87ee 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # IFS=: 00:23:40.191 19:16:55 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:23:40.191 19:16:55 -- json_config/json_config.sh@74 -- # [[ bdev_register:0d95d5d8-8193-4f3d-918c-80319789f0b2 bdev_register:7c7d6023-c1d2-4da9-bea6-12d91ccf87ee bdev_register:97bdf7f2-0f81-4751-854d-17dd564b9bf0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 
bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:db21bf00-31ce-484d-9e39-5fff562c42aa != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\d\9\5\d\5\d\8\-\8\1\9\3\-\4\f\3\d\-\9\1\8\c\-\8\0\3\1\9\7\8\9\f\0\b\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\c\7\d\6\0\2\3\-\c\1\d\2\-\4\d\a\9\-\b\e\a\6\-\1\2\d\9\1\c\c\f\8\7\e\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\7\b\d\f\7\f\2\-\0\f\8\1\-\4\7\5\1\-\8\5\4\d\-\1\7\d\d\5\6\4\b\9\b\f\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\b\2\1\b\f\0\0\-\3\1\c\e\-\4\8\4\d\-\9\e\3\9\-\5\f\f\f\5\6\2\c\4\2\a\a ]] 00:23:40.191 19:16:55 -- json_config/json_config.sh@86 -- # cat 00:23:40.192 19:16:55 -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:0d95d5d8-8193-4f3d-918c-80319789f0b2 bdev_register:7c7d6023-c1d2-4da9-bea6-12d91ccf87ee bdev_register:97bdf7f2-0f81-4751-854d-17dd564b9bf0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:db21bf00-31ce-484d-9e39-5fff562c42aa 00:23:40.192 Expected events matched: 00:23:40.192 bdev_register:0d95d5d8-8193-4f3d-918c-80319789f0b2 00:23:40.192 bdev_register:7c7d6023-c1d2-4da9-bea6-12d91ccf87ee 00:23:40.192 bdev_register:97bdf7f2-0f81-4751-854d-17dd564b9bf0 00:23:40.192 bdev_register:Malloc0 00:23:40.192 bdev_register:Malloc0p0 00:23:40.192 bdev_register:Malloc0p1 00:23:40.192 bdev_register:Malloc0p2 00:23:40.192 bdev_register:Malloc1 00:23:40.192 bdev_register:Malloc3 00:23:40.192 bdev_register:Null0 00:23:40.192 bdev_register:Nvme0n1 00:23:40.192 bdev_register:Nvme0n1p0 00:23:40.192 bdev_register:Nvme0n1p1 00:23:40.192 bdev_register:PTBdevFromMalloc3 00:23:40.192 bdev_register:aio_disk 00:23:40.192 bdev_register:db21bf00-31ce-484d-9e39-5fff562c42aa 00:23:40.192 19:16:55 -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:23:40.192 19:16:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:40.192 19:16:55 -- common/autotest_common.sh@10 -- # set +x 00:23:40.192 19:16:55 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:23:40.192 19:16:55 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:23:40.192 19:16:55 -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:23:40.192 19:16:55 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:23:40.192 19:16:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:40.192 19:16:55 -- common/autotest_common.sh@10 -- # set +x 00:23:40.192 19:16:55 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:23:40.192 19:16:55 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:23:40.192 19:16:55 -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:23:40.449 MallocBdevForConfigChangeCheck 00:23:40.449 19:16:56 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:23:40.449 19:16:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:40.449 19:16:56 -- common/autotest_common.sh@10 -- # set +x 00:23:40.449 19:16:56 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:23:40.449 19:16:56 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:40.707 INFO: shutting down applications... 00:23:40.707 19:16:56 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:23:40.707 19:16:56 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:23:40.707 19:16:56 -- json_config/json_config.sh@368 -- # json_config_clear target 00:23:40.707 19:16:56 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:23:40.707 19:16:56 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:23:40.965 [2024-04-18 19:16:56.795437] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:23:41.224 Calling clear_vhost_scsi_subsystem 00:23:41.224 Calling clear_iscsi_subsystem 00:23:41.224 Calling clear_vhost_blk_subsystem 00:23:41.224 Calling clear_nbd_subsystem 00:23:41.224 Calling clear_nvmf_subsystem 00:23:41.224 Calling clear_bdev_subsystem 00:23:41.224 19:16:56 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:23:41.224 19:16:56 -- json_config/json_config.sh@343 -- # count=100 00:23:41.224 19:16:56 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:23:41.224 19:16:56 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:41.224 19:16:56 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:23:41.224 19:16:56 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:23:41.483 19:16:57 -- json_config/json_config.sh@345 -- # break 00:23:41.483 19:16:57 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:23:41.483 19:16:57 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:23:41.483 19:16:57 -- json_config/common.sh@31 -- # local app=target 00:23:41.483 19:16:57 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:23:41.483 19:16:57 -- json_config/common.sh@35 -- # [[ -n 111149 ]] 00:23:41.483 19:16:57 -- json_config/common.sh@38 -- # kill -SIGINT 111149 00:23:41.483 19:16:57 -- json_config/common.sh@40 -- # (( i = 0 )) 00:23:41.483 19:16:57 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:41.483 19:16:57 -- json_config/common.sh@41 -- # kill -0 111149 00:23:41.483 19:16:57 -- json_config/common.sh@45 -- # sleep 0.5 00:23:42.051 19:16:57 -- json_config/common.sh@40 -- # (( i++ )) 00:23:42.051 19:16:57 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:42.051 19:16:57 -- json_config/common.sh@41 -- # kill -0 111149 00:23:42.051 19:16:57 -- json_config/common.sh@45 -- # sleep 0.5 00:23:42.642 19:16:58 -- json_config/common.sh@40 -- # (( i++ )) 00:23:42.642 19:16:58 -- json_config/common.sh@40 -- # (( i < 
30 )) 00:23:42.642 19:16:58 -- json_config/common.sh@41 -- # kill -0 111149 00:23:42.642 19:16:58 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:23:42.642 19:16:58 -- json_config/common.sh@43 -- # break 00:23:42.642 19:16:58 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:23:42.642 SPDK target shutdown done 00:23:42.642 19:16:58 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:23:42.642 INFO: relaunching applications... 00:23:42.642 19:16:58 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:23:42.642 19:16:58 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:42.642 19:16:58 -- json_config/common.sh@9 -- # local app=target 00:23:42.642 19:16:58 -- json_config/common.sh@10 -- # shift 00:23:42.642 19:16:58 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:23:42.642 19:16:58 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:23:42.642 19:16:58 -- json_config/common.sh@15 -- # local app_extra_params= 00:23:42.642 19:16:58 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:42.642 19:16:58 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:42.642 19:16:58 -- json_config/common.sh@22 -- # app_pid["$app"]=111424 00:23:42.642 19:16:58 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:23:42.642 Waiting for target to run... 00:23:42.642 19:16:58 -- json_config/common.sh@25 -- # waitforlisten 111424 /var/tmp/spdk_tgt.sock 00:23:42.642 19:16:58 -- common/autotest_common.sh@817 -- # '[' -z 111424 ']' 00:23:42.642 19:16:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:42.642 19:16:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:42.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:42.642 19:16:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:42.642 19:16:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:42.642 19:16:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.642 19:16:58 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:42.642 [2024-04-18 19:16:58.425294] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:23:42.642 [2024-04-18 19:16:58.425501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111424 ] 00:23:43.209 [2024-04-18 19:16:58.854918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.209 [2024-04-18 19:16:59.086551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.143 [2024-04-18 19:16:59.998811] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:23:44.143 [2024-04-18 19:16:59.998919] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:23:44.143 [2024-04-18 19:17:00.006784] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:23:44.143 [2024-04-18 19:17:00.006834] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:23:44.143 [2024-04-18 19:17:00.014802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:44.143 [2024-04-18 19:17:00.014871] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:23:44.143 [2024-04-18 19:17:00.014933] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:23:44.401 [2024-04-18 19:17:00.111862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:23:44.401 [2024-04-18 19:17:00.111958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.401 [2024-04-18 19:17:00.111987] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:44.401 [2024-04-18 19:17:00.112020] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.401 [2024-04-18 19:17:00.112554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.401 [2024-04-18 19:17:00.112596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:23:45.404 19:17:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:45.405 19:17:01 -- common/autotest_common.sh@850 -- # return 0 00:23:45.405 00:23:45.405 19:17:01 -- json_config/common.sh@26 -- # echo '' 00:23:45.405 19:17:01 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:23:45.405 INFO: Checking if target configuration is the same... 00:23:45.405 19:17:01 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:23:45.405 19:17:01 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:45.405 19:17:01 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:23:45.405 19:17:01 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:45.405 + '[' 2 -ne 2 ']' 00:23:45.405 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:23:45.405 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:23:45.405 + rootdir=/home/vagrant/spdk_repo/spdk 00:23:45.405 +++ basename /dev/fd/62 00:23:45.405 ++ mktemp /tmp/62.XXX 00:23:45.405 + tmp_file_1=/tmp/62.lvW 00:23:45.405 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:45.405 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:23:45.405 + tmp_file_2=/tmp/spdk_tgt_config.json.BiZ 00:23:45.405 + ret=0 00:23:45.405 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:45.663 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:45.663 + diff -u /tmp/62.lvW /tmp/spdk_tgt_config.json.BiZ 00:23:45.663 INFO: JSON config files are the same 00:23:45.663 + echo 'INFO: JSON config files are the same' 00:23:45.663 + rm /tmp/62.lvW /tmp/spdk_tgt_config.json.BiZ 00:23:45.663 + exit 0 00:23:45.663 19:17:01 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:23:45.663 INFO: changing configuration and checking if this can be detected... 00:23:45.663 19:17:01 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:23:45.663 19:17:01 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:23:45.663 19:17:01 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:23:45.921 19:17:01 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:45.921 19:17:01 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:23:45.921 19:17:01 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:45.921 + '[' 2 -ne 2 ']' 00:23:45.921 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:23:45.921 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:23:45.921 + rootdir=/home/vagrant/spdk_repo/spdk 00:23:45.921 +++ basename /dev/fd/62 00:23:45.921 ++ mktemp /tmp/62.XXX 00:23:45.921 + tmp_file_1=/tmp/62.Bg0 00:23:45.921 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:45.921 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:23:45.921 + tmp_file_2=/tmp/spdk_tgt_config.json.WnD 00:23:45.921 + ret=0 00:23:45.921 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:46.179 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:46.436 + diff -u /tmp/62.Bg0 /tmp/spdk_tgt_config.json.WnD 00:23:46.436 + ret=1 00:23:46.436 + echo '=== Start of file: /tmp/62.Bg0 ===' 00:23:46.436 + cat /tmp/62.Bg0 00:23:46.436 + echo '=== End of file: /tmp/62.Bg0 ===' 00:23:46.436 + echo '' 00:23:46.436 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WnD ===' 00:23:46.436 + cat /tmp/spdk_tgt_config.json.WnD 00:23:46.436 + echo '=== End of file: /tmp/spdk_tgt_config.json.WnD ===' 00:23:46.436 + echo '' 00:23:46.437 + rm /tmp/62.Bg0 /tmp/spdk_tgt_config.json.WnD 00:23:46.437 + exit 1 00:23:46.437 19:17:02 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:23:46.437 INFO: configuration change detected. 
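The "configuration change detected" verdict above comes from normalizing and diffing two JSON documents: the live configuration dumped over RPC and the configuration file the target was started from. A minimal sketch of that comparison, assuming config_filter.py reads the JSON document on stdin as json_diff.sh uses it here (temporary file names are illustrative):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock
live=$(mktemp) ; saved=$(mktemp)
# Dump the running target's configuration and sort it into a canonical order.
"$SPDK/scripts/rpc.py" -s "$SOCK" save_config | "$SPDK/test/json_config/config_filter.py" -method sort > "$live"
# Sort the previously saved configuration the same way.
"$SPDK/test/json_config/config_filter.py" -method sort < "$SPDK/spdk_tgt_config.json" > "$saved"
# diff exits 0 when the configurations match and 1 when they drifted.
diff -u "$saved" "$live"
rm -f "$live" "$saved"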
00:23:46.437 19:17:02 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:23:46.437 19:17:02 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:23:46.437 19:17:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:46.437 19:17:02 -- common/autotest_common.sh@10 -- # set +x 00:23:46.437 19:17:02 -- json_config/json_config.sh@307 -- # local ret=0 00:23:46.437 19:17:02 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:23:46.437 19:17:02 -- json_config/json_config.sh@317 -- # [[ -n 111424 ]] 00:23:46.437 19:17:02 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:23:46.437 19:17:02 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:23:46.437 19:17:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:46.437 19:17:02 -- common/autotest_common.sh@10 -- # set +x 00:23:46.437 19:17:02 -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:23:46.437 19:17:02 -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:23:46.437 19:17:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:23:46.695 19:17:02 -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:23:46.695 19:17:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:23:46.953 19:17:02 -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:23:46.953 19:17:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:23:47.211 19:17:02 -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:23:47.211 19:17:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:23:47.468 19:17:03 -- json_config/json_config.sh@193 -- # uname -s 00:23:47.469 19:17:03 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:23:47.469 19:17:03 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:23:47.469 19:17:03 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:23:47.469 19:17:03 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:23:47.469 19:17:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:47.469 19:17:03 -- common/autotest_common.sh@10 -- # set +x 00:23:47.469 19:17:03 -- json_config/json_config.sh@323 -- # killprocess 111424 00:23:47.469 19:17:03 -- common/autotest_common.sh@936 -- # '[' -z 111424 ']' 00:23:47.469 19:17:03 -- common/autotest_common.sh@940 -- # kill -0 111424 00:23:47.469 19:17:03 -- common/autotest_common.sh@941 -- # uname 00:23:47.469 19:17:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:47.469 19:17:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111424 00:23:47.469 killing process with pid 111424 00:23:47.469 19:17:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:47.469 19:17:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:47.469 19:17:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111424' 00:23:47.469 19:17:03 -- common/autotest_common.sh@955 -- # kill 111424 00:23:47.469 19:17:03 -- common/autotest_common.sh@960 -- # wait 111424 00:23:48.403 19:17:04 -- json_config/json_config.sh@326 -- # rm -f 
/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:48.403 19:17:04 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:23:48.403 19:17:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:48.403 19:17:04 -- common/autotest_common.sh@10 -- # set +x 00:23:48.662 19:17:04 -- json_config/json_config.sh@328 -- # return 0 00:23:48.662 INFO: Success 00:23:48.662 19:17:04 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:23:48.662 00:23:48.662 real 0m15.366s 00:23:48.662 user 0m21.711s 00:23:48.662 sys 0m2.640s 00:23:48.662 19:17:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:48.662 19:17:04 -- common/autotest_common.sh@10 -- # set +x 00:23:48.662 ************************************ 00:23:48.662 END TEST json_config 00:23:48.662 ************************************ 00:23:48.662 19:17:04 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:48.662 19:17:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:48.662 19:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:48.662 19:17:04 -- common/autotest_common.sh@10 -- # set +x 00:23:48.662 ************************************ 00:23:48.662 START TEST json_config_extra_key 00:23:48.662 ************************************ 00:23:48.662 19:17:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:48.662 19:17:04 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:48.662 19:17:04 -- nvmf/common.sh@7 -- # uname -s 00:23:48.662 19:17:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.662 19:17:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.662 19:17:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.662 19:17:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.662 19:17:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.662 19:17:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.662 19:17:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.662 19:17:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.662 19:17:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.662 19:17:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.662 19:17:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f2efc95-cfc8-4fa8-9b1c-526b1dbcbd8b 00:23:48.662 19:17:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=4f2efc95-cfc8-4fa8-9b1c-526b1dbcbd8b 00:23:48.662 19:17:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.662 19:17:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.662 19:17:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:48.662 19:17:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.662 19:17:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:48.662 19:17:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.662 19:17:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.662 19:17:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.662 19:17:04 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:48.662 19:17:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:48.662 19:17:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:48.662 19:17:04 -- paths/export.sh@5 -- # export PATH 00:23:48.662 19:17:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:48.662 19:17:04 -- nvmf/common.sh@47 -- # : 0 00:23:48.662 19:17:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.662 19:17:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.662 19:17:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.662 19:17:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.662 19:17:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.662 19:17:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.662 19:17:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.662 19:17:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.662 19:17:04 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:23:48.663 INFO: launching applications... 00:23:48.663 19:17:04 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:48.663 19:17:04 -- json_config/common.sh@9 -- # local app=target 00:23:48.663 19:17:04 -- json_config/common.sh@10 -- # shift 00:23:48.663 19:17:04 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:23:48.663 19:17:04 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:23:48.663 19:17:04 -- json_config/common.sh@15 -- # local app_extra_params= 00:23:48.663 19:17:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:48.663 19:17:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:48.663 19:17:04 -- json_config/common.sh@22 -- # app_pid["$app"]=111645 00:23:48.663 19:17:04 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:23:48.663 Waiting for target to run... 00:23:48.663 19:17:04 -- json_config/common.sh@25 -- # waitforlisten 111645 /var/tmp/spdk_tgt.sock 00:23:48.663 19:17:04 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:48.663 19:17:04 -- common/autotest_common.sh@817 -- # '[' -z 111645 ']' 00:23:48.663 19:17:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:48.663 19:17:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:48.663 19:17:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:48.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:48.663 19:17:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:48.663 19:17:04 -- common/autotest_common.sh@10 -- # set +x 00:23:48.921 [2024-04-18 19:17:04.641972] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:48.921 [2024-04-18 19:17:04.642186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111645 ] 00:23:49.180 [2024-04-18 19:17:05.066911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.438 [2024-04-18 19:17:05.276196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.372 19:17:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:50.372 19:17:06 -- common/autotest_common.sh@850 -- # return 0 00:23:50.372 19:17:06 -- json_config/common.sh@26 -- # echo '' 00:23:50.372 00:23:50.372 INFO: shutting down applications... 00:23:50.372 19:17:06 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
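The shutdown that follows is the retry loop from json_config/common.sh already seen earlier in this log: send SIGINT, then poll the pid in half-second steps. A minimal sketch of that pattern (the pid variable is illustrative):

# Send SIGINT and wait up to ~15 s (30 * 0.5 s) for the target to exit.
pid=$tgt_pid    # pid of the spdk_tgt instance launched above (illustrative variable)
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # kill -0 fails once the process is gone
    sleep 0.5
done
kill -0 "$pid" 2>/dev/null && echo "SPDK target did not shut down" >&2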
00:23:50.372 19:17:06 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:23:50.372 19:17:06 -- json_config/common.sh@31 -- # local app=target 00:23:50.372 19:17:06 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:23:50.372 19:17:06 -- json_config/common.sh@35 -- # [[ -n 111645 ]] 00:23:50.372 19:17:06 -- json_config/common.sh@38 -- # kill -SIGINT 111645 00:23:50.372 19:17:06 -- json_config/common.sh@40 -- # (( i = 0 )) 00:23:50.372 19:17:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:50.372 19:17:06 -- json_config/common.sh@41 -- # kill -0 111645 00:23:50.372 19:17:06 -- json_config/common.sh@45 -- # sleep 0.5 00:23:50.938 19:17:06 -- json_config/common.sh@40 -- # (( i++ )) 00:23:50.938 19:17:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:50.938 19:17:06 -- json_config/common.sh@41 -- # kill -0 111645 00:23:50.938 19:17:06 -- json_config/common.sh@45 -- # sleep 0.5 00:23:51.505 19:17:07 -- json_config/common.sh@40 -- # (( i++ )) 00:23:51.505 19:17:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:51.505 19:17:07 -- json_config/common.sh@41 -- # kill -0 111645 00:23:51.505 19:17:07 -- json_config/common.sh@45 -- # sleep 0.5 00:23:51.764 19:17:07 -- json_config/common.sh@40 -- # (( i++ )) 00:23:51.764 19:17:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:51.764 19:17:07 -- json_config/common.sh@41 -- # kill -0 111645 00:23:51.764 19:17:07 -- json_config/common.sh@45 -- # sleep 0.5 00:23:52.331 19:17:08 -- json_config/common.sh@40 -- # (( i++ )) 00:23:52.331 19:17:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:52.331 19:17:08 -- json_config/common.sh@41 -- # kill -0 111645 00:23:52.331 19:17:08 -- json_config/common.sh@45 -- # sleep 0.5 00:23:52.897 19:17:08 -- json_config/common.sh@40 -- # (( i++ )) 00:23:52.897 19:17:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:52.897 19:17:08 -- json_config/common.sh@41 -- # kill -0 111645 00:23:52.897 19:17:08 -- json_config/common.sh@45 -- # sleep 0.5 00:23:53.464 19:17:09 -- json_config/common.sh@40 -- # (( i++ )) 00:23:53.464 19:17:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:53.464 19:17:09 -- json_config/common.sh@41 -- # kill -0 111645 00:23:53.464 19:17:09 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:23:53.464 19:17:09 -- json_config/common.sh@43 -- # break 00:23:53.464 SPDK target shutdown done 00:23:53.464 19:17:09 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:23:53.464 19:17:09 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:23:53.464 Success 00:23:53.464 19:17:09 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:23:53.464 00:23:53.464 real 0m4.722s 00:23:53.464 user 0m4.566s 00:23:53.464 sys 0m0.544s 00:23:53.464 19:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:53.464 ************************************ 00:23:53.464 END TEST json_config_extra_key 00:23:53.464 ************************************ 00:23:53.464 19:17:09 -- common/autotest_common.sh@10 -- # set +x 00:23:53.464 19:17:09 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:53.464 19:17:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:53.464 19:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.464 19:17:09 -- common/autotest_common.sh@10 -- # set +x 00:23:53.464 ************************************ 00:23:53.464 START TEST alias_rpc 00:23:53.464 ************************************ 00:23:53.464 19:17:09 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:53.464 * Looking for test storage... 00:23:53.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:23:53.464 19:17:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:23:53.464 19:17:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=111767 00:23:53.464 19:17:09 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.464 19:17:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 111767 00:23:53.464 19:17:09 -- common/autotest_common.sh@817 -- # '[' -z 111767 ']' 00:23:53.464 19:17:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.464 19:17:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:53.464 19:17:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.464 19:17:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:53.464 19:17:09 -- common/autotest_common.sh@10 -- # set +x 00:23:53.722 [2024-04-18 19:17:09.454437] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:23:53.722 [2024-04-18 19:17:09.454644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111767 ] 00:23:53.722 [2024-04-18 19:17:09.634003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.289 [2024-04-18 19:17:09.936703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.223 19:17:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:55.223 19:17:10 -- common/autotest_common.sh@850 -- # return 0 00:23:55.223 19:17:10 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:23:55.481 19:17:11 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 111767 00:23:55.481 19:17:11 -- common/autotest_common.sh@936 -- # '[' -z 111767 ']' 00:23:55.481 19:17:11 -- common/autotest_common.sh@940 -- # kill -0 111767 00:23:55.481 19:17:11 -- common/autotest_common.sh@941 -- # uname 00:23:55.481 19:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:55.481 19:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111767 00:23:55.481 19:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:55.481 19:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:55.481 19:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111767' 00:23:55.481 killing process with pid 111767 00:23:55.481 19:17:11 -- common/autotest_common.sh@955 -- # kill 111767 00:23:55.482 19:17:11 -- common/autotest_common.sh@960 -- # wait 111767 00:23:58.068 ************************************ 00:23:58.068 END TEST alias_rpc 00:23:58.068 ************************************ 00:23:58.068 00:23:58.068 real 0m4.431s 00:23:58.068 user 0m4.655s 00:23:58.068 sys 0m0.504s 00:23:58.068 19:17:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:58.068 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 19:17:13 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:23:58.068 19:17:13 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:23:58.068 19:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:58.068 19:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.068 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 ************************************ 00:23:58.068 START TEST spdkcli_tcp 00:23:58.068 ************************************ 00:23:58.068 19:17:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:23:58.068 * Looking for test storage... 00:23:58.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:58.068 19:17:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:58.068 19:17:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:23:58.068 19:17:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:58.068 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=111901 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 111901 00:23:58.068 19:17:13 -- common/autotest_common.sh@817 -- # '[' -z 111901 ']' 00:23:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.068 19:17:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.068 19:17:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:58.068 19:17:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.068 19:17:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:58.068 19:17:13 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:58.068 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 [2024-04-18 19:17:13.989590] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
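The spdkcli_tcp run below drives the same RPC interface over TCP by bridging the target's UNIX-domain socket with socat. A minimal sketch of that pattern, using the address and port from this run:

# Bridge the target's UNIX-domain RPC socket to TCP 127.0.0.1:9998.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
bridge_pid=$!
# Query the target through the bridge; -r retries while socat comes up, -t is the per-request timeout.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$bridge_pid" 2>/dev/null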
00:23:58.068 [2024-04-18 19:17:13.989875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111901 ] 00:23:58.325 [2024-04-18 19:17:14.169909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:58.583 [2024-04-18 19:17:14.396461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.583 [2024-04-18 19:17:14.396468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.515 19:17:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:59.515 19:17:15 -- common/autotest_common.sh@850 -- # return 0 00:23:59.515 19:17:15 -- spdkcli/tcp.sh@31 -- # socat_pid=111930 00:23:59.515 19:17:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:23:59.515 19:17:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:23:59.775 [ 00:23:59.775 "spdk_get_version", 00:23:59.775 "rpc_get_methods", 00:23:59.775 "keyring_get_keys", 00:23:59.775 "trace_get_info", 00:23:59.775 "trace_get_tpoint_group_mask", 00:23:59.775 "trace_disable_tpoint_group", 00:23:59.775 "trace_enable_tpoint_group", 00:23:59.775 "trace_clear_tpoint_mask", 00:23:59.775 "trace_set_tpoint_mask", 00:23:59.775 "framework_get_pci_devices", 00:23:59.775 "framework_get_config", 00:23:59.775 "framework_get_subsystems", 00:23:59.775 "iobuf_get_stats", 00:23:59.775 "iobuf_set_options", 00:23:59.775 "sock_set_default_impl", 00:23:59.775 "sock_impl_set_options", 00:23:59.775 "sock_impl_get_options", 00:23:59.775 "vmd_rescan", 00:23:59.775 "vmd_remove_device", 00:23:59.775 "vmd_enable", 00:23:59.775 "accel_get_stats", 00:23:59.775 "accel_set_options", 00:23:59.775 "accel_set_driver", 00:23:59.775 "accel_crypto_key_destroy", 00:23:59.775 "accel_crypto_keys_get", 00:23:59.775 "accel_crypto_key_create", 00:23:59.775 "accel_assign_opc", 00:23:59.775 "accel_get_module_info", 00:23:59.775 "accel_get_opc_assignments", 00:23:59.775 "notify_get_notifications", 00:23:59.775 "notify_get_types", 00:23:59.775 "bdev_get_histogram", 00:23:59.775 "bdev_enable_histogram", 00:23:59.775 "bdev_set_qos_limit", 00:23:59.775 "bdev_set_qd_sampling_period", 00:23:59.775 "bdev_get_bdevs", 00:23:59.775 "bdev_reset_iostat", 00:23:59.775 "bdev_get_iostat", 00:23:59.775 "bdev_examine", 00:23:59.775 "bdev_wait_for_examine", 00:23:59.775 "bdev_set_options", 00:23:59.775 "scsi_get_devices", 00:23:59.775 "thread_set_cpumask", 00:23:59.775 "framework_get_scheduler", 00:23:59.775 "framework_set_scheduler", 00:23:59.775 "framework_get_reactors", 00:23:59.775 "thread_get_io_channels", 00:23:59.775 "thread_get_pollers", 00:23:59.775 "thread_get_stats", 00:23:59.775 "framework_monitor_context_switch", 00:23:59.775 "spdk_kill_instance", 00:23:59.775 "log_enable_timestamps", 00:23:59.775 "log_get_flags", 00:23:59.775 "log_clear_flag", 00:23:59.775 "log_set_flag", 00:23:59.775 "log_get_level", 00:23:59.775 "log_set_level", 00:23:59.775 "log_get_print_level", 00:23:59.775 "log_set_print_level", 00:23:59.775 "framework_enable_cpumask_locks", 00:23:59.775 "framework_disable_cpumask_locks", 00:23:59.775 "framework_wait_init", 00:23:59.775 "framework_start_init", 00:23:59.775 "virtio_blk_create_transport", 00:23:59.775 "virtio_blk_get_transports", 00:23:59.775 "vhost_controller_set_coalescing", 00:23:59.775 "vhost_get_controllers", 00:23:59.775 
"vhost_delete_controller", 00:23:59.775 "vhost_create_blk_controller", 00:23:59.775 "vhost_scsi_controller_remove_target", 00:23:59.775 "vhost_scsi_controller_add_target", 00:23:59.775 "vhost_start_scsi_controller", 00:23:59.775 "vhost_create_scsi_controller", 00:23:59.775 "nbd_get_disks", 00:23:59.775 "nbd_stop_disk", 00:23:59.775 "nbd_start_disk", 00:23:59.775 "env_dpdk_get_mem_stats", 00:23:59.776 "nvmf_subsystem_get_listeners", 00:23:59.776 "nvmf_subsystem_get_qpairs", 00:23:59.776 "nvmf_subsystem_get_controllers", 00:23:59.776 "nvmf_get_stats", 00:23:59.776 "nvmf_get_transports", 00:23:59.776 "nvmf_create_transport", 00:23:59.776 "nvmf_get_targets", 00:23:59.776 "nvmf_delete_target", 00:23:59.776 "nvmf_create_target", 00:23:59.776 "nvmf_subsystem_allow_any_host", 00:23:59.776 "nvmf_subsystem_remove_host", 00:23:59.776 "nvmf_subsystem_add_host", 00:23:59.776 "nvmf_ns_remove_host", 00:23:59.776 "nvmf_ns_add_host", 00:23:59.776 "nvmf_subsystem_remove_ns", 00:23:59.776 "nvmf_subsystem_add_ns", 00:23:59.776 "nvmf_subsystem_listener_set_ana_state", 00:23:59.776 "nvmf_discovery_get_referrals", 00:23:59.776 "nvmf_discovery_remove_referral", 00:23:59.776 "nvmf_discovery_add_referral", 00:23:59.776 "nvmf_subsystem_remove_listener", 00:23:59.776 "nvmf_subsystem_add_listener", 00:23:59.776 "nvmf_delete_subsystem", 00:23:59.776 "nvmf_create_subsystem", 00:23:59.776 "nvmf_get_subsystems", 00:23:59.776 "nvmf_set_crdt", 00:23:59.776 "nvmf_set_config", 00:23:59.776 "nvmf_set_max_subsystems", 00:23:59.776 "iscsi_set_options", 00:23:59.776 "iscsi_get_auth_groups", 00:23:59.776 "iscsi_auth_group_remove_secret", 00:23:59.776 "iscsi_auth_group_add_secret", 00:23:59.776 "iscsi_delete_auth_group", 00:23:59.776 "iscsi_create_auth_group", 00:23:59.776 "iscsi_set_discovery_auth", 00:23:59.776 "iscsi_get_options", 00:23:59.776 "iscsi_target_node_request_logout", 00:23:59.776 "iscsi_target_node_set_redirect", 00:23:59.776 "iscsi_target_node_set_auth", 00:23:59.776 "iscsi_target_node_add_lun", 00:23:59.776 "iscsi_get_stats", 00:23:59.776 "iscsi_get_connections", 00:23:59.776 "iscsi_portal_group_set_auth", 00:23:59.776 "iscsi_start_portal_group", 00:23:59.776 "iscsi_delete_portal_group", 00:23:59.776 "iscsi_create_portal_group", 00:23:59.776 "iscsi_get_portal_groups", 00:23:59.776 "iscsi_delete_target_node", 00:23:59.776 "iscsi_target_node_remove_pg_ig_maps", 00:23:59.776 "iscsi_target_node_add_pg_ig_maps", 00:23:59.776 "iscsi_create_target_node", 00:23:59.776 "iscsi_get_target_nodes", 00:23:59.776 "iscsi_delete_initiator_group", 00:23:59.776 "iscsi_initiator_group_remove_initiators", 00:23:59.776 "iscsi_initiator_group_add_initiators", 00:23:59.776 "iscsi_create_initiator_group", 00:23:59.776 "iscsi_get_initiator_groups", 00:23:59.776 "keyring_linux_set_options", 00:23:59.776 "keyring_file_remove_key", 00:23:59.776 "keyring_file_add_key", 00:23:59.776 "iaa_scan_accel_module", 00:23:59.776 "dsa_scan_accel_module", 00:23:59.776 "ioat_scan_accel_module", 00:23:59.776 "accel_error_inject_error", 00:23:59.776 "bdev_iscsi_delete", 00:23:59.776 "bdev_iscsi_create", 00:23:59.776 "bdev_iscsi_set_options", 00:23:59.776 "bdev_virtio_attach_controller", 00:23:59.776 "bdev_virtio_scsi_get_devices", 00:23:59.776 "bdev_virtio_detach_controller", 00:23:59.776 "bdev_virtio_blk_set_hotplug", 00:23:59.776 "bdev_ftl_set_property", 00:23:59.776 "bdev_ftl_get_properties", 00:23:59.776 "bdev_ftl_get_stats", 00:23:59.776 "bdev_ftl_unmap", 00:23:59.776 "bdev_ftl_unload", 00:23:59.776 "bdev_ftl_delete", 00:23:59.776 "bdev_ftl_load", 
00:23:59.776 "bdev_ftl_create", 00:23:59.776 "bdev_aio_delete", 00:23:59.776 "bdev_aio_rescan", 00:23:59.776 "bdev_aio_create", 00:23:59.776 "blobfs_create", 00:23:59.776 "blobfs_detect", 00:23:59.776 "blobfs_set_cache_size", 00:23:59.776 "bdev_zone_block_delete", 00:23:59.776 "bdev_zone_block_create", 00:23:59.776 "bdev_delay_delete", 00:23:59.776 "bdev_delay_create", 00:23:59.776 "bdev_delay_update_latency", 00:23:59.776 "bdev_split_delete", 00:23:59.776 "bdev_split_create", 00:23:59.776 "bdev_error_inject_error", 00:23:59.776 "bdev_error_delete", 00:23:59.776 "bdev_error_create", 00:23:59.776 "bdev_raid_set_options", 00:23:59.776 "bdev_raid_remove_base_bdev", 00:23:59.776 "bdev_raid_add_base_bdev", 00:23:59.776 "bdev_raid_delete", 00:23:59.776 "bdev_raid_create", 00:23:59.776 "bdev_raid_get_bdevs", 00:23:59.776 "bdev_lvol_grow_lvstore", 00:23:59.776 "bdev_lvol_get_lvols", 00:23:59.776 "bdev_lvol_get_lvstores", 00:23:59.776 "bdev_lvol_delete", 00:23:59.776 "bdev_lvol_set_read_only", 00:23:59.776 "bdev_lvol_resize", 00:23:59.776 "bdev_lvol_decouple_parent", 00:23:59.776 "bdev_lvol_inflate", 00:23:59.776 "bdev_lvol_rename", 00:23:59.776 "bdev_lvol_clone_bdev", 00:23:59.776 "bdev_lvol_clone", 00:23:59.776 "bdev_lvol_snapshot", 00:23:59.776 "bdev_lvol_create", 00:23:59.776 "bdev_lvol_delete_lvstore", 00:23:59.776 "bdev_lvol_rename_lvstore", 00:23:59.776 "bdev_lvol_create_lvstore", 00:23:59.776 "bdev_passthru_delete", 00:23:59.776 "bdev_passthru_create", 00:23:59.776 "bdev_nvme_cuse_unregister", 00:23:59.776 "bdev_nvme_cuse_register", 00:23:59.776 "bdev_opal_new_user", 00:23:59.776 "bdev_opal_set_lock_state", 00:23:59.776 "bdev_opal_delete", 00:23:59.776 "bdev_opal_get_info", 00:23:59.776 "bdev_opal_create", 00:23:59.776 "bdev_nvme_opal_revert", 00:23:59.776 "bdev_nvme_opal_init", 00:23:59.776 "bdev_nvme_send_cmd", 00:23:59.776 "bdev_nvme_get_path_iostat", 00:23:59.776 "bdev_nvme_get_mdns_discovery_info", 00:23:59.776 "bdev_nvme_stop_mdns_discovery", 00:23:59.776 "bdev_nvme_start_mdns_discovery", 00:23:59.776 "bdev_nvme_set_multipath_policy", 00:23:59.776 "bdev_nvme_set_preferred_path", 00:23:59.776 "bdev_nvme_get_io_paths", 00:23:59.776 "bdev_nvme_remove_error_injection", 00:23:59.776 "bdev_nvme_add_error_injection", 00:23:59.776 "bdev_nvme_get_discovery_info", 00:23:59.776 "bdev_nvme_stop_discovery", 00:23:59.776 "bdev_nvme_start_discovery", 00:23:59.776 "bdev_nvme_get_controller_health_info", 00:23:59.776 "bdev_nvme_disable_controller", 00:23:59.776 "bdev_nvme_enable_controller", 00:23:59.776 "bdev_nvme_reset_controller", 00:23:59.776 "bdev_nvme_get_transport_statistics", 00:23:59.776 "bdev_nvme_apply_firmware", 00:23:59.776 "bdev_nvme_detach_controller", 00:23:59.776 "bdev_nvme_get_controllers", 00:23:59.776 "bdev_nvme_attach_controller", 00:23:59.776 "bdev_nvme_set_hotplug", 00:23:59.776 "bdev_nvme_set_options", 00:23:59.776 "bdev_null_resize", 00:23:59.776 "bdev_null_delete", 00:23:59.776 "bdev_null_create", 00:23:59.776 "bdev_malloc_delete", 00:23:59.776 "bdev_malloc_create" 00:23:59.776 ] 00:23:59.776 19:17:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:23:59.776 19:17:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:59.776 19:17:15 -- common/autotest_common.sh@10 -- # set +x 00:24:00.034 19:17:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:00.034 19:17:15 -- spdkcli/tcp.sh@38 -- # killprocess 111901 00:24:00.034 19:17:15 -- common/autotest_common.sh@936 -- # '[' -z 111901 ']' 00:24:00.034 19:17:15 -- common/autotest_common.sh@940 -- # kill -0 
111901 00:24:00.034 19:17:15 -- common/autotest_common.sh@941 -- # uname 00:24:00.034 19:17:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:00.034 19:17:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111901 00:24:00.034 19:17:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:00.034 killing process with pid 111901 00:24:00.034 19:17:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:00.034 19:17:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111901' 00:24:00.034 19:17:15 -- common/autotest_common.sh@955 -- # kill 111901 00:24:00.034 19:17:15 -- common/autotest_common.sh@960 -- # wait 111901 00:24:02.591 00:24:02.591 real 0m4.657s 00:24:02.591 user 0m8.388s 00:24:02.591 sys 0m0.536s 00:24:02.591 ************************************ 00:24:02.591 END TEST spdkcli_tcp 00:24:02.591 ************************************ 00:24:02.591 19:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:02.591 19:17:18 -- common/autotest_common.sh@10 -- # set +x 00:24:02.591 19:17:18 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:24:02.591 19:17:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:02.591 19:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.591 19:17:18 -- common/autotest_common.sh@10 -- # set +x 00:24:02.867 ************************************ 00:24:02.867 START TEST dpdk_mem_utility 00:24:02.867 ************************************ 00:24:02.867 19:17:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:24:02.867 * Looking for test storage... 00:24:02.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:24:02.867 19:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:24:02.867 19:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=112039 00:24:02.867 19:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:02.867 19:17:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 112039 00:24:02.867 19:17:18 -- common/autotest_common.sh@817 -- # '[' -z 112039 ']' 00:24:02.867 19:17:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.867 19:17:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.867 19:17:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.867 19:17:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.867 19:17:18 -- common/autotest_common.sh@10 -- # set +x 00:24:02.867 [2024-04-18 19:17:18.736695] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
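The dpdk_mem_utility test that starts below only needs two helpers: an RPC that dumps DPDK memory statistics to a file, and a script that summarizes that file. A minimal sketch, assuming the target listens on the default /var/tmp/spdk.sock as in this run:

SPDK=/home/vagrant/spdk_repo/spdk
# Ask the running target to write its DPDK memory statistics
# (the RPC reports the file it wrote, /tmp/spdk_mem_dump.txt in this run).
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
# Summarize heaps, mempools and memzones from the dump.
"$SPDK/scripts/dpdk_mem_info.py"
# Per-heap detail for heap id 0.
"$SPDK/scripts/dpdk_mem_info.py" -m 0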
00:24:02.867 [2024-04-18 19:17:18.736890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112039 ] 00:24:03.167 [2024-04-18 19:17:18.924743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.429 [2024-04-18 19:17:19.210332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.388 19:17:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:04.388 19:17:20 -- common/autotest_common.sh@850 -- # return 0 00:24:04.388 19:17:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:24:04.388 19:17:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:24:04.388 19:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.388 19:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:04.388 { 00:24:04.388 "filename": "/tmp/spdk_mem_dump.txt" 00:24:04.388 } 00:24:04.388 19:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.388 19:17:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:24:04.651 DPDK memory size 820.000000 MiB in 1 heap(s) 00:24:04.651 1 heaps totaling size 820.000000 MiB 00:24:04.651 size: 820.000000 MiB heap id: 0 00:24:04.651 end heaps---------- 00:24:04.651 8 mempools totaling size 598.116089 MiB 00:24:04.651 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:24:04.651 size: 158.602051 MiB name: PDU_data_out_Pool 00:24:04.651 size: 84.521057 MiB name: bdev_io_112039 00:24:04.651 size: 51.011292 MiB name: evtpool_112039 00:24:04.651 size: 50.003479 MiB name: msgpool_112039 00:24:04.651 size: 21.763794 MiB name: PDU_Pool 00:24:04.651 size: 19.513306 MiB name: SCSI_TASK_Pool 00:24:04.651 size: 0.026123 MiB name: Session_Pool 00:24:04.651 end mempools------- 00:24:04.651 6 memzones totaling size 4.142822 MiB 00:24:04.651 size: 1.000366 MiB name: RG_ring_0_112039 00:24:04.651 size: 1.000366 MiB name: RG_ring_1_112039 00:24:04.651 size: 1.000366 MiB name: RG_ring_4_112039 00:24:04.651 size: 1.000366 MiB name: RG_ring_5_112039 00:24:04.651 size: 0.125366 MiB name: RG_ring_2_112039 00:24:04.651 size: 0.015991 MiB name: RG_ring_3_112039 00:24:04.651 end memzones------- 00:24:04.651 19:17:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:24:04.651 heap id: 0 total size: 820.000000 MiB number of busy elements: 227 number of free elements: 18 00:24:04.651 list of free elements. 
size: 18.469482 MiB 00:24:04.651 element at address: 0x200000400000 with size: 1.999451 MiB 00:24:04.651 element at address: 0x200000800000 with size: 1.996887 MiB 00:24:04.651 element at address: 0x200007000000 with size: 1.995972 MiB 00:24:04.651 element at address: 0x20000b200000 with size: 1.995972 MiB 00:24:04.651 element at address: 0x200019100040 with size: 0.999939 MiB 00:24:04.651 element at address: 0x200019500040 with size: 0.999939 MiB 00:24:04.651 element at address: 0x200019600000 with size: 0.999329 MiB 00:24:04.651 element at address: 0x200003e00000 with size: 0.996094 MiB 00:24:04.651 element at address: 0x200032200000 with size: 0.994324 MiB 00:24:04.651 element at address: 0x200018e00000 with size: 0.959656 MiB 00:24:04.651 element at address: 0x200019900040 with size: 0.937256 MiB 00:24:04.651 element at address: 0x200000200000 with size: 0.834106 MiB 00:24:04.651 element at address: 0x20001b000000 with size: 0.561218 MiB 00:24:04.651 element at address: 0x200019200000 with size: 0.489197 MiB 00:24:04.651 element at address: 0x200019a00000 with size: 0.485413 MiB 00:24:04.651 element at address: 0x200013800000 with size: 0.469116 MiB 00:24:04.651 element at address: 0x200028400000 with size: 0.399475 MiB 00:24:04.651 element at address: 0x200003a00000 with size: 0.356140 MiB 00:24:04.651 list of standard malloc elements. size: 199.266113 MiB 00:24:04.651 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:24:04.651 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:24:04.651 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:24:04.651 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:24:04.651 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:24:04.651 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:24:04.651 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:24:04.651 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:24:04.651 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:24:04.651 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:24:04.651 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:24:04.651 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:24:04.651 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:24:04.652 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200003aff980 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200003affa80 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200003eff000 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200013878180 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200013878280 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200013878380 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200013878480 with size: 0.000244 MiB 00:24:04.652 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:24:04.652 element at address: 0x200019abc680 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:24:04.652 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0950c0 with size: 0.000244 MiB 
00:24:04.653 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:24:04.653 element at address: 0x200028466440 with size: 0.000244 MiB 00:24:04.653 element at address: 0x200028466540 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d200 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d480 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d580 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d680 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d780 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d880 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846d980 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846da80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846db80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846de80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846df80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e080 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e180 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e280 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e380 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e480 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e580 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e680 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e780 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e880 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846e980 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f080 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f180 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f280 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f380 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f480 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f580 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f680 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f780 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f880 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846f980 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:24:04.653 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:24:04.653 list of 
memzone associated elements. size: 602.264404 MiB 00:24:04.653 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:24:04.653 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:24:04.653 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:24:04.653 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:24:04.653 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:24:04.653 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_112039_0 00:24:04.653 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:24:04.653 associated memzone info: size: 48.002930 MiB name: MP_evtpool_112039_0 00:24:04.653 element at address: 0x200003fff340 with size: 48.003113 MiB 00:24:04.653 associated memzone info: size: 48.002930 MiB name: MP_msgpool_112039_0 00:24:04.653 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:24:04.653 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:24:04.653 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:24:04.653 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:24:04.653 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:24:04.653 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_112039 00:24:04.653 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:24:04.653 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_112039 00:24:04.653 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:24:04.653 associated memzone info: size: 1.007996 MiB name: MP_evtpool_112039 00:24:04.653 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:24:04.653 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:24:04.653 element at address: 0x200019abc780 with size: 1.008179 MiB 00:24:04.653 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:24:04.653 element at address: 0x200018efde00 with size: 1.008179 MiB 00:24:04.653 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:24:04.653 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:24:04.653 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:24:04.653 element at address: 0x200003eff100 with size: 1.000549 MiB 00:24:04.653 associated memzone info: size: 1.000366 MiB name: RG_ring_0_112039 00:24:04.653 element at address: 0x200003affb80 with size: 1.000549 MiB 00:24:04.653 associated memzone info: size: 1.000366 MiB name: RG_ring_1_112039 00:24:04.653 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:24:04.653 associated memzone info: size: 1.000366 MiB name: RG_ring_4_112039 00:24:04.653 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:24:04.653 associated memzone info: size: 1.000366 MiB name: RG_ring_5_112039 00:24:04.653 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:24:04.653 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_112039 00:24:04.653 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:24:04.653 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:24:04.653 element at address: 0x200013878680 with size: 0.500549 MiB 00:24:04.653 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:24:04.653 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:24:04.653 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:24:04.653 element at address: 0x200003adf740 with size: 0.125549 MiB 
00:24:04.653 associated memzone info: size: 0.125366 MiB name: RG_ring_2_112039 00:24:04.653 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:24:04.653 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:24:04.654 element at address: 0x200028466640 with size: 0.023804 MiB 00:24:04.654 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:24:04.654 element at address: 0x200003adb500 with size: 0.016174 MiB 00:24:04.654 associated memzone info: size: 0.015991 MiB name: RG_ring_3_112039 00:24:04.654 element at address: 0x20002846c7c0 with size: 0.002502 MiB 00:24:04.654 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:24:04.654 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:24:04.654 associated memzone info: size: 0.000183 MiB name: MP_msgpool_112039 00:24:04.654 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:24:04.654 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_112039 00:24:04.654 element at address: 0x20002846d300 with size: 0.000366 MiB 00:24:04.654 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:24:04.654 19:17:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:24:04.654 19:17:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 112039 00:24:04.654 19:17:20 -- common/autotest_common.sh@936 -- # '[' -z 112039 ']' 00:24:04.654 19:17:20 -- common/autotest_common.sh@940 -- # kill -0 112039 00:24:04.654 19:17:20 -- common/autotest_common.sh@941 -- # uname 00:24:04.654 19:17:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.654 19:17:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112039 00:24:04.654 19:17:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:04.654 19:17:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:04.654 19:17:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112039' 00:24:04.654 killing process with pid 112039 00:24:04.654 19:17:20 -- common/autotest_common.sh@955 -- # kill 112039 00:24:04.654 19:17:20 -- common/autotest_common.sh@960 -- # wait 112039 00:24:07.937 00:24:07.937 real 0m4.659s 00:24:07.937 user 0m4.707s 00:24:07.937 sys 0m0.550s 00:24:07.937 ************************************ 00:24:07.937 END TEST dpdk_mem_utility 00:24:07.937 ************************************ 00:24:07.937 19:17:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:07.937 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:24:07.937 19:17:23 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:24:07.937 19:17:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:07.937 19:17:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:07.937 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:24:07.937 ************************************ 00:24:07.937 START TEST event 00:24:07.937 ************************************ 00:24:07.937 19:17:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:24:07.937 * Looking for test storage... 
00:24:07.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:24:07.937 19:17:23 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:07.937 19:17:23 -- bdev/nbd_common.sh@6 -- # set -e 00:24:07.937 19:17:23 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:24:07.937 19:17:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:07.937 19:17:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:07.937 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:24:07.937 ************************************ 00:24:07.937 START TEST event_perf 00:24:07.937 ************************************ 00:24:07.937 19:17:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:24:07.937 Running I/O for 1 seconds...[2024-04-18 19:17:23.484204] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:24:07.937 [2024-04-18 19:17:23.484354] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112194 ] 00:24:07.937 [2024-04-18 19:17:23.672574] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.196 [2024-04-18 19:17:23.912454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.196 [2024-04-18 19:17:23.912700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.196 [2024-04-18 19:17:23.912635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.196 Running I/O for 1 seconds...[2024-04-18 19:17:23.912700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.571 00:24:09.571 lcore 0: 165012 00:24:09.571 lcore 1: 165012 00:24:09.571 lcore 2: 165013 00:24:09.571 lcore 3: 165015 00:24:09.571 done. 00:24:09.571 00:24:09.571 real 0m1.922s 00:24:09.571 user 0m4.693s 00:24:09.571 sys 0m0.127s 00:24:09.571 19:17:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:09.571 19:17:25 -- common/autotest_common.sh@10 -- # set +x 00:24:09.571 ************************************ 00:24:09.571 END TEST event_perf 00:24:09.571 ************************************ 00:24:09.571 19:17:25 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:24:09.571 19:17:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:09.571 19:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:09.571 19:17:25 -- common/autotest_common.sh@10 -- # set +x 00:24:09.571 ************************************ 00:24:09.571 START TEST event_reactor 00:24:09.571 ************************************ 00:24:09.571 19:17:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:24:09.829 [2024-04-18 19:17:25.505872] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:24:09.829 [2024-04-18 19:17:25.506461] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112252 ] 00:24:09.829 [2024-04-18 19:17:25.686780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.087 [2024-04-18 19:17:25.935852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.463 test_start 00:24:11.463 oneshot 00:24:11.463 tick 100 00:24:11.463 tick 100 00:24:11.463 tick 250 00:24:11.463 tick 100 00:24:11.463 tick 100 00:24:11.463 tick 100 00:24:11.463 tick 250 00:24:11.463 tick 500 00:24:11.463 tick 100 00:24:11.463 tick 100 00:24:11.463 tick 250 00:24:11.463 tick 100 00:24:11.463 tick 100 00:24:11.463 test_end 00:24:11.463 ************************************ 00:24:11.463 END TEST event_reactor 00:24:11.463 ************************************ 00:24:11.463 00:24:11.463 real 0m1.913s 00:24:11.463 user 0m1.682s 00:24:11.463 sys 0m0.129s 00:24:11.463 19:17:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:11.463 19:17:27 -- common/autotest_common.sh@10 -- # set +x 00:24:11.722 19:17:27 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:24:11.722 19:17:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:11.722 19:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:11.722 19:17:27 -- common/autotest_common.sh@10 -- # set +x 00:24:11.722 ************************************ 00:24:11.722 START TEST event_reactor_perf 00:24:11.722 ************************************ 00:24:11.722 19:17:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:24:11.722 [2024-04-18 19:17:27.496282] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:24:11.722 [2024-04-18 19:17:27.496612] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112299 ] 00:24:11.981 [2024-04-18 19:17:27.659228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.981 [2024-04-18 19:17:27.865237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.884 test_start 00:24:13.884 test_end 00:24:13.885 Performance: 371289 events per second 00:24:13.885 ************************************ 00:24:13.885 END TEST event_reactor_perf 00:24:13.885 ************************************ 00:24:13.885 00:24:13.885 real 0m1.888s 00:24:13.885 user 0m1.663s 00:24:13.885 sys 0m0.124s 00:24:13.885 19:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:13.885 19:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.885 19:17:29 -- event/event.sh@49 -- # uname -s 00:24:13.885 19:17:29 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:24:13.885 19:17:29 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:24:13.885 19:17:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:13.885 19:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:13.885 19:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.885 ************************************ 00:24:13.885 START TEST event_scheduler 00:24:13.885 ************************************ 00:24:13.885 19:17:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:24:13.885 * Looking for test storage... 00:24:13.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:24:13.885 19:17:29 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:24:13.885 19:17:29 -- scheduler/scheduler.sh@35 -- # scheduler_pid=112385 00:24:13.885 19:17:29 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:24:13.885 19:17:29 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:24:13.885 19:17:29 -- scheduler/scheduler.sh@37 -- # waitforlisten 112385 00:24:13.885 19:17:29 -- common/autotest_common.sh@817 -- # '[' -z 112385 ']' 00:24:13.885 19:17:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.885 19:17:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:13.885 19:17:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.885 19:17:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:13.885 19:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.885 [2024-04-18 19:17:29.638754] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:24:13.885 [2024-04-18 19:17:29.639132] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112385 ] 00:24:14.143 [2024-04-18 19:17:29.815155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.143 [2024-04-18 19:17:30.035034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.143 [2024-04-18 19:17:30.035261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.143 [2024-04-18 19:17:30.036912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.143 [2024-04-18 19:17:30.036920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.709 19:17:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:14.709 19:17:30 -- common/autotest_common.sh@850 -- # return 0 00:24:14.709 19:17:30 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:24:14.709 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.709 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.709 POWER: Env isn't set yet! 00:24:14.709 POWER: Attempting to initialise ACPI cpufreq power management... 00:24:14.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:14.709 POWER: Cannot set governor of lcore 0 to userspace 00:24:14.709 POWER: Attempting to initialise PSTAT power management... 00:24:14.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:14.709 POWER: Cannot set governor of lcore 0 to performance 00:24:14.709 POWER: Attempting to initialise AMD PSTATE power management... 00:24:14.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:14.709 POWER: Cannot set governor of lcore 0 to userspace 00:24:14.709 POWER: Attempting to initialise CPPC power management... 00:24:14.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:14.709 POWER: Cannot set governor of lcore 0 to userspace 00:24:14.709 POWER: Attempting to initialise VM power management... 00:24:14.709 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:24:14.709 POWER: Unable to set Power Management Environment for lcore 0 00:24:14.709 [2024-04-18 19:17:30.544911] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:24:14.709 [2024-04-18 19:17:30.545058] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:24:14.709 [2024-04-18 19:17:30.545122] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:24:14.709 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.709 19:17:30 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:24:14.709 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.709 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 [2024-04-18 19:17:30.890792] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
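The POWER errors above come from the dynamic scheduler probing each cpufreq backend in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC) and finally the VM power-management channel; on this VM none of them is exposed, so the DPDK governor fails to initialize and the scheduler test continues without it. A minimal sketch, assuming a stock Linux sysfs layout (this helper is not part of scheduler.sh), of checking up front whether any scaling governor is available:

    # Sketch only: report which CPUs expose a cpufreq scaling governor.
    # On hosts like the VM above none do, so the dynamic scheduler logs the
    # POWER errors and falls back instead of using the DPDK governor.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov="$cpu/cpufreq/scaling_governor"
        if [[ -r "$gov" ]]; then
            echo "$(basename "$cpu"): $(cat "$gov")"
        else
            echo "$(basename "$cpu"): no cpufreq governor exposed"
        fi
    done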
00:24:14.967 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.967 19:17:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:24:14.967 19:17:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:14.967 19:17:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:14.967 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 ************************************ 00:24:15.226 START TEST scheduler_create_thread 00:24:15.226 ************************************ 00:24:15.226 19:17:30 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 2 00:24:15.226 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 3 00:24:15.226 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 4 00:24:15.226 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 5 00:24:15.226 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 6 00:24:15.226 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 7 00:24:15.226 19:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:30 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:24:15.226 19:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 8 00:24:15.226 19:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:24:15.226 19:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 9 00:24:15.226 
19:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:24:15.226 19:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 10 00:24:15.226 19:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:24:15.226 19:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 19:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:24:15.226 19:17:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:24:15.226 19:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:15.226 19:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.226 19:17:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:24:15.226 19:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.226 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.601 19:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.601 19:17:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:24:16.601 19:17:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:24:16.601 19:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.601 19:17:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.975 ************************************ 00:24:17.975 END TEST scheduler_create_thread 00:24:17.975 ************************************ 00:24:17.975 19:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.975 00:24:17.975 real 0m2.637s 00:24:17.975 user 0m0.013s 00:24:17.975 sys 0m0.006s 00:24:17.975 19:17:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:17.975 19:17:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.975 19:17:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:17.975 19:17:33 -- scheduler/scheduler.sh@46 -- # killprocess 112385 00:24:17.975 19:17:33 -- common/autotest_common.sh@936 -- # '[' -z 112385 ']' 00:24:17.975 19:17:33 -- common/autotest_common.sh@940 -- # kill -0 112385 00:24:17.975 19:17:33 -- common/autotest_common.sh@941 -- # uname 00:24:17.976 19:17:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.976 19:17:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112385 00:24:17.976 killing process with pid 112385 00:24:17.976 19:17:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:17.976 19:17:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:17.976 19:17:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112385' 00:24:17.976 19:17:33 -- common/autotest_common.sh@955 -- # kill 112385 00:24:17.976 19:17:33 -- common/autotest_common.sh@960 -- # wait 112385 00:24:18.234 [2024-04-18 19:17:34.056671] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
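The scheduler_create_thread subtest above drives the test app purely over RPC: it creates pinned active and idle threads with scheduler_thread_create (name, cpumask, activity level), lowers one thread's activity with scheduler_thread_set_active, and removes another with scheduler_thread_delete. A minimal sketch of that sequence, assuming the scheduler test app is already running and listening on the default RPC socket and that PYTHONPATH includes the directory providing scheduler_plugin; it calls scripts/rpc.py directly instead of the rpc_cmd helper used in the log:

    # Sketch of the RPC calls exercised by scheduler_create_thread above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PLUGIN="--plugin scheduler_plugin"

    # create a thread pinned to core 0 reporting 100% activity; the call
    # returns the new thread id (11, 12, ... in the log above)
    tid=$($RPC $PLUGIN scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # drop it to 50% busy, then delete it
    $RPC $PLUGIN scheduler_thread_set_active "$tid" 50
    $RPC $PLUGIN scheduler_thread_delete "$tid"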
00:24:20.139 ************************************ 00:24:20.139 END TEST event_scheduler 00:24:20.140 ************************************ 00:24:20.140 00:24:20.140 real 0m6.108s 00:24:20.140 user 0m10.210s 00:24:20.140 sys 0m0.497s 00:24:20.140 19:17:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:20.140 19:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:20.140 19:17:35 -- event/event.sh@51 -- # modprobe -n nbd 00:24:20.140 19:17:35 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:24:20.140 19:17:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:20.140 19:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:20.140 19:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:20.140 ************************************ 00:24:20.140 START TEST app_repeat 00:24:20.140 ************************************ 00:24:20.140 19:17:35 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:24:20.140 19:17:35 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:20.140 19:17:35 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:24:20.140 19:17:35 -- event/event.sh@13 -- # local nbd_list 00:24:20.140 19:17:35 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:24:20.140 19:17:35 -- event/event.sh@14 -- # local bdev_list 00:24:20.140 19:17:35 -- event/event.sh@15 -- # local repeat_times=4 00:24:20.140 19:17:35 -- event/event.sh@17 -- # modprobe nbd 00:24:20.140 19:17:35 -- event/event.sh@19 -- # repeat_pid=112543 00:24:20.140 19:17:35 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:24:20.140 19:17:35 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:24:20.140 19:17:35 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112543' 00:24:20.140 Process app_repeat pid: 112543 00:24:20.140 19:17:35 -- event/event.sh@23 -- # for i in {0..2} 00:24:20.140 19:17:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:24:20.140 spdk_app_start Round 0 00:24:20.140 19:17:35 -- event/event.sh@25 -- # waitforlisten 112543 /var/tmp/spdk-nbd.sock 00:24:20.140 19:17:35 -- common/autotest_common.sh@817 -- # '[' -z 112543 ']' 00:24:20.140 19:17:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:20.140 19:17:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:20.140 19:17:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:20.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:20.140 19:17:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:20.140 19:17:35 -- common/autotest_common.sh@10 -- # set +x 00:24:20.140 [2024-04-18 19:17:35.725442] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:24:20.140 [2024-04-18 19:17:35.725871] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112543 ] 00:24:20.140 [2024-04-18 19:17:35.913871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:20.399 [2024-04-18 19:17:36.193149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.399 [2024-04-18 19:17:36.193151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.967 19:17:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.967 19:17:36 -- common/autotest_common.sh@850 -- # return 0 00:24:20.967 19:17:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:21.225 Malloc0 00:24:21.225 19:17:37 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:21.790 Malloc1 00:24:21.790 19:17:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@12 -- # local i 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.790 19:17:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:21.790 /dev/nbd0 00:24:22.048 19:17:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:22.048 19:17:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:22.048 19:17:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:22.048 19:17:37 -- common/autotest_common.sh@855 -- # local i 00:24:22.048 19:17:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:22.048 19:17:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:22.048 19:17:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:22.048 19:17:37 -- common/autotest_common.sh@859 -- # break 00:24:22.048 19:17:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:22.048 19:17:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:22.048 19:17:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:22.048 1+0 records in 00:24:22.048 1+0 records out 00:24:22.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371445 s, 11.0 MB/s 00:24:22.048 19:17:37 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:22.048 19:17:37 -- common/autotest_common.sh@872 -- # size=4096 00:24:22.048 19:17:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:22.048 19:17:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:22.048 19:17:37 -- common/autotest_common.sh@875 -- # return 0 00:24:22.048 19:17:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:22.048 19:17:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:22.048 19:17:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:22.306 /dev/nbd1 00:24:22.306 19:17:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:22.306 19:17:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:22.306 19:17:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:22.306 19:17:38 -- common/autotest_common.sh@855 -- # local i 00:24:22.306 19:17:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:22.306 19:17:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:22.306 19:17:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:22.306 19:17:38 -- common/autotest_common.sh@859 -- # break 00:24:22.306 19:17:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:22.306 19:17:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:22.306 19:17:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:22.306 1+0 records in 00:24:22.306 1+0 records out 00:24:22.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427177 s, 9.6 MB/s 00:24:22.307 19:17:38 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:22.307 19:17:38 -- common/autotest_common.sh@872 -- # size=4096 00:24:22.307 19:17:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:22.307 19:17:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:22.307 19:17:38 -- common/autotest_common.sh@875 -- # return 0 00:24:22.307 19:17:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:22.307 19:17:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:22.307 19:17:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:22.307 19:17:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:22.307 19:17:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:22.565 19:17:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:22.565 { 00:24:22.565 "nbd_device": "/dev/nbd0", 00:24:22.565 "bdev_name": "Malloc0" 00:24:22.565 }, 00:24:22.566 { 00:24:22.566 "nbd_device": "/dev/nbd1", 00:24:22.566 "bdev_name": "Malloc1" 00:24:22.566 } 00:24:22.566 ]' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:22.566 { 00:24:22.566 "nbd_device": "/dev/nbd0", 00:24:22.566 "bdev_name": "Malloc0" 00:24:22.566 }, 00:24:22.566 { 00:24:22.566 "nbd_device": "/dev/nbd1", 00:24:22.566 "bdev_name": "Malloc1" 00:24:22.566 } 00:24:22.566 ]' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:22.566 /dev/nbd1' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:22.566 /dev/nbd1' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:22.566 
19:17:38 -- bdev/nbd_common.sh@65 -- # count=2 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@95 -- # count=2 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:22.566 256+0 records in 00:24:22.566 256+0 records out 00:24:22.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00894955 s, 117 MB/s 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:22.566 256+0 records in 00:24:22.566 256+0 records out 00:24:22.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235124 s, 44.6 MB/s 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:22.566 256+0 records in 00:24:22.566 256+0 records out 00:24:22.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0393195 s, 26.7 MB/s 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@51 -- # local i 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.566 19:17:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:22.825 
19:17:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@41 -- # break 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@45 -- # return 0 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.825 19:17:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@41 -- # break 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@65 -- # true 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@65 -- # count=0 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@104 -- # count=0 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:23.393 19:17:39 -- bdev/nbd_common.sh@109 -- # return 0 00:24:23.393 19:17:39 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:23.960 19:17:39 -- event/event.sh@35 -- # sleep 3 00:24:25.335 [2024-04-18 19:17:41.243442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:25.593 [2024-04-18 19:17:41.457971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.593 [2024-04-18 19:17:41.457971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.851 [2024-04-18 19:17:41.683005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:25.851 [2024-04-18 19:17:41.683379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:27.226 spdk_app_start Round 1 00:24:27.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:24:27.226 19:17:42 -- event/event.sh@23 -- # for i in {0..2} 00:24:27.226 19:17:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:24:27.226 19:17:42 -- event/event.sh@25 -- # waitforlisten 112543 /var/tmp/spdk-nbd.sock 00:24:27.226 19:17:42 -- common/autotest_common.sh@817 -- # '[' -z 112543 ']' 00:24:27.226 19:17:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:27.226 19:17:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:27.226 19:17:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:27.226 19:17:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:27.226 19:17:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.226 19:17:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:27.226 19:17:43 -- common/autotest_common.sh@850 -- # return 0 00:24:27.226 19:17:43 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:27.498 Malloc0 00:24:27.498 19:17:43 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:27.761 Malloc1 00:24:27.761 19:17:43 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@12 -- # local i 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:27.761 19:17:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:28.327 /dev/nbd0 00:24:28.327 19:17:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:28.327 19:17:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:28.327 19:17:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:28.327 19:17:44 -- common/autotest_common.sh@855 -- # local i 00:24:28.327 19:17:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:28.327 19:17:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:28.327 19:17:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:28.327 19:17:44 -- common/autotest_common.sh@859 -- # break 00:24:28.327 19:17:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:28.327 19:17:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:28.327 19:17:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:28.327 1+0 records in 00:24:28.327 1+0 
records out 00:24:28.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396254 s, 10.3 MB/s 00:24:28.327 19:17:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:28.327 19:17:44 -- common/autotest_common.sh@872 -- # size=4096 00:24:28.327 19:17:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:28.327 19:17:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:28.327 19:17:44 -- common/autotest_common.sh@875 -- # return 0 00:24:28.327 19:17:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.327 19:17:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.327 19:17:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:28.585 /dev/nbd1 00:24:28.585 19:17:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:28.585 19:17:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:28.585 19:17:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:28.585 19:17:44 -- common/autotest_common.sh@855 -- # local i 00:24:28.585 19:17:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:28.585 19:17:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:28.585 19:17:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:28.585 19:17:44 -- common/autotest_common.sh@859 -- # break 00:24:28.585 19:17:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:28.585 19:17:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:28.585 19:17:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:28.585 1+0 records in 00:24:28.585 1+0 records out 00:24:28.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422292 s, 9.7 MB/s 00:24:28.585 19:17:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:28.585 19:17:44 -- common/autotest_common.sh@872 -- # size=4096 00:24:28.585 19:17:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:28.585 19:17:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:28.585 19:17:44 -- common/autotest_common.sh@875 -- # return 0 00:24:28.585 19:17:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.585 19:17:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.585 19:17:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:28.586 19:17:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.586 19:17:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:28.844 { 00:24:28.844 "nbd_device": "/dev/nbd0", 00:24:28.844 "bdev_name": "Malloc0" 00:24:28.844 }, 00:24:28.844 { 00:24:28.844 "nbd_device": "/dev/nbd1", 00:24:28.844 "bdev_name": "Malloc1" 00:24:28.844 } 00:24:28.844 ]' 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:28.844 { 00:24:28.844 "nbd_device": "/dev/nbd0", 00:24:28.844 "bdev_name": "Malloc0" 00:24:28.844 }, 00:24:28.844 { 00:24:28.844 "nbd_device": "/dev/nbd1", 00:24:28.844 "bdev_name": "Malloc1" 00:24:28.844 } 00:24:28.844 ]' 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:28.844 /dev/nbd1' 00:24:28.844 19:17:44 
-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:28.844 /dev/nbd1' 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@65 -- # count=2 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@95 -- # count=2 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:28.844 256+0 records in 00:24:28.844 256+0 records out 00:24:28.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010535 s, 99.5 MB/s 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:28.844 19:17:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:29.102 256+0 records in 00:24:29.102 256+0 records out 00:24:29.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281763 s, 37.2 MB/s 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:29.103 256+0 records in 00:24:29.103 256+0 records out 00:24:29.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294772 s, 35.6 MB/s 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@51 -- # local i 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.103 19:17:44 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@41 -- # break 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.361 19:17:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:29.620 19:17:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@41 -- # break 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.621 19:17:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@65 -- # true 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@65 -- # count=0 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@104 -- # count=0 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:29.879 19:17:45 -- bdev/nbd_common.sh@109 -- # return 0 00:24:29.879 19:17:45 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:30.450 19:17:46 -- event/event.sh@35 -- # sleep 3 00:24:31.855 [2024-04-18 19:17:47.516919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.855 [2024-04-18 19:17:47.732997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.855 [2024-04-18 19:17:47.732997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.113 [2024-04-18 19:17:47.944768] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:32.113 [2024-04-18 19:17:47.944872] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:33.550 spdk_app_start Round 2 00:24:33.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:24:33.550 19:17:49 -- event/event.sh@23 -- # for i in {0..2} 00:24:33.550 19:17:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:24:33.550 19:17:49 -- event/event.sh@25 -- # waitforlisten 112543 /var/tmp/spdk-nbd.sock 00:24:33.550 19:17:49 -- common/autotest_common.sh@817 -- # '[' -z 112543 ']' 00:24:33.550 19:17:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:33.550 19:17:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:33.550 19:17:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:33.550 19:17:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:33.550 19:17:49 -- common/autotest_common.sh@10 -- # set +x 00:24:33.550 19:17:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.550 19:17:49 -- common/autotest_common.sh@850 -- # return 0 00:24:33.550 19:17:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:33.823 Malloc0 00:24:33.823 19:17:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:34.081 Malloc1 00:24:34.081 19:17:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@12 -- # local i 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:34.081 19:17:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:34.340 /dev/nbd0 00:24:34.340 19:17:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:34.340 19:17:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:34.340 19:17:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:34.340 19:17:50 -- common/autotest_common.sh@855 -- # local i 00:24:34.340 19:17:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:34.340 19:17:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:34.340 19:17:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:34.340 19:17:50 -- common/autotest_common.sh@859 -- # break 00:24:34.340 19:17:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:34.340 19:17:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:34.340 19:17:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:34.340 1+0 records in 00:24:34.340 1+0 
records out 00:24:34.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301349 s, 13.6 MB/s 00:24:34.340 19:17:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:34.340 19:17:50 -- common/autotest_common.sh@872 -- # size=4096 00:24:34.340 19:17:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:34.340 19:17:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:34.340 19:17:50 -- common/autotest_common.sh@875 -- # return 0 00:24:34.340 19:17:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.340 19:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:34.340 19:17:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:34.599 /dev/nbd1 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:34.599 19:17:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:34.599 19:17:50 -- common/autotest_common.sh@855 -- # local i 00:24:34.599 19:17:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:34.599 19:17:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:34.599 19:17:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:34.599 19:17:50 -- common/autotest_common.sh@859 -- # break 00:24:34.599 19:17:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:34.599 19:17:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:34.599 19:17:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:34.599 1+0 records in 00:24:34.599 1+0 records out 00:24:34.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297514 s, 13.8 MB/s 00:24:34.599 19:17:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:34.599 19:17:50 -- common/autotest_common.sh@872 -- # size=4096 00:24:34.599 19:17:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:34.599 19:17:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:34.599 19:17:50 -- common/autotest_common.sh@875 -- # return 0 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.599 19:17:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:35.166 { 00:24:35.166 "nbd_device": "/dev/nbd0", 00:24:35.166 "bdev_name": "Malloc0" 00:24:35.166 }, 00:24:35.166 { 00:24:35.166 "nbd_device": "/dev/nbd1", 00:24:35.166 "bdev_name": "Malloc1" 00:24:35.166 } 00:24:35.166 ]' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:35.166 { 00:24:35.166 "nbd_device": "/dev/nbd0", 00:24:35.166 "bdev_name": "Malloc0" 00:24:35.166 }, 00:24:35.166 { 00:24:35.166 "nbd_device": "/dev/nbd1", 00:24:35.166 "bdev_name": "Malloc1" 00:24:35.166 } 00:24:35.166 ]' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:35.166 /dev/nbd1' 00:24:35.166 19:17:50 
-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:35.166 /dev/nbd1' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@65 -- # count=2 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@95 -- # count=2 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:35.166 256+0 records in 00:24:35.166 256+0 records out 00:24:35.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012379 s, 84.7 MB/s 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:35.166 256+0 records in 00:24:35.166 256+0 records out 00:24:35.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224293 s, 46.8 MB/s 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:35.166 256+0 records in 00:24:35.166 256+0 records out 00:24:35.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355621 s, 29.5 MB/s 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@51 -- # local i 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.166 19:17:50 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:35.425 19:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:35.425 19:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@41 -- # break 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.426 19:17:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@41 -- # break 00:24:35.682 19:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.683 19:17:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:35.683 19:17:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.683 19:17:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:35.940 19:17:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:35.940 19:17:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:35.940 19:17:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@65 -- # true 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@65 -- # count=0 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@104 -- # count=0 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:36.198 19:17:51 -- bdev/nbd_common.sh@109 -- # return 0 00:24:36.198 19:17:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:36.456 19:17:52 -- event/event.sh@35 -- # sleep 3 00:24:38.355 [2024-04-18 19:17:53.797264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:38.355 [2024-04-18 19:17:54.009918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.355 [2024-04-18 19:17:54.009919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.355 [2024-04-18 19:17:54.230332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:38.355 [2024-04-18 19:17:54.230459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:39.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
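Each round ends with the teardown traced above: every device is stopped over the RPC socket, /proc/partitions is polled until the kernel drops the device node, and nbd_get_disks must come back empty before spdk_kill_instance restarts the app. A condensed sketch, with the rpc.py method names exactly as logged; the 0.1 s poll interval is an assumption (the trace shows the device already gone on the first check).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    for nbd in /dev/nbd0 /dev/nbd1; do
        "$rpc" -s "$sock" nbd_stop_disk "$nbd"
        name=$(basename "$nbd")
        for ((i = 1; i <= 20; i++)); do            # bounded wait for the device to vanish
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done

    # no devices may be left: count /dev/nbd entries in the nbd_get_disks output
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

    "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # graceful stop before the next round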
00:24:39.735 19:17:55 -- event/event.sh@38 -- # waitforlisten 112543 /var/tmp/spdk-nbd.sock 00:24:39.735 19:17:55 -- common/autotest_common.sh@817 -- # '[' -z 112543 ']' 00:24:39.735 19:17:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:39.735 19:17:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.735 19:17:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:39.735 19:17:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.735 19:17:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.735 19:17:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:39.735 19:17:55 -- common/autotest_common.sh@850 -- # return 0 00:24:39.735 19:17:55 -- event/event.sh@39 -- # killprocess 112543 00:24:39.735 19:17:55 -- common/autotest_common.sh@936 -- # '[' -z 112543 ']' 00:24:39.735 19:17:55 -- common/autotest_common.sh@940 -- # kill -0 112543 00:24:39.735 19:17:55 -- common/autotest_common.sh@941 -- # uname 00:24:39.735 19:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:39.735 19:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112543 00:24:39.735 killing process with pid 112543 00:24:39.735 19:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:39.735 19:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:39.735 19:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112543' 00:24:39.735 19:17:55 -- common/autotest_common.sh@955 -- # kill 112543 00:24:39.735 19:17:55 -- common/autotest_common.sh@960 -- # wait 112543 00:24:41.111 spdk_app_start is called in Round 0. 00:24:41.111 Shutdown signal received, stop current app iteration 00:24:41.111 Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 reinitialization... 00:24:41.111 spdk_app_start is called in Round 1. 00:24:41.111 Shutdown signal received, stop current app iteration 00:24:41.111 Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 reinitialization... 00:24:41.111 spdk_app_start is called in Round 2. 00:24:41.111 Shutdown signal received, stop current app iteration 00:24:41.111 Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 reinitialization... 00:24:41.111 spdk_app_start is called in Round 3. 
00:24:41.111 Shutdown signal received, stop current app iteration 00:24:41.111 19:17:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:24:41.111 19:17:56 -- event/event.sh@42 -- # return 0 00:24:41.111 ************************************ 00:24:41.111 END TEST app_repeat 00:24:41.111 ************************************ 00:24:41.111 00:24:41.111 real 0m21.293s 00:24:41.111 user 0m45.307s 00:24:41.111 sys 0m3.172s 00:24:41.111 19:17:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:41.111 19:17:56 -- common/autotest_common.sh@10 -- # set +x 00:24:41.111 19:17:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:24:41.111 19:17:56 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:24:41.111 19:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:41.111 19:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:41.111 19:17:56 -- common/autotest_common.sh@10 -- # set +x 00:24:41.371 ************************************ 00:24:41.371 START TEST cpu_locks 00:24:41.371 ************************************ 00:24:41.371 19:17:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:24:41.371 * Looking for test storage... 00:24:41.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:24:41.371 19:17:57 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:24:41.371 19:17:57 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:24:41.371 19:17:57 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:24:41.371 19:17:57 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:24:41.371 19:17:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:41.371 19:17:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:41.371 19:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.371 ************************************ 00:24:41.371 START TEST default_locks 00:24:41.371 ************************************ 00:24:41.371 19:17:57 -- common/autotest_common.sh@1111 -- # default_locks 00:24:41.371 19:17:57 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113128 00:24:41.371 19:17:57 -- event/cpu_locks.sh@47 -- # waitforlisten 113128 00:24:41.371 19:17:57 -- common/autotest_common.sh@817 -- # '[' -z 113128 ']' 00:24:41.371 19:17:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.371 19:17:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:41.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.371 19:17:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.371 19:17:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:41.371 19:17:57 -- common/autotest_common.sh@10 -- # set +x 00:24:41.371 19:17:57 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:41.371 [2024-04-18 19:17:57.269833] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:24:41.371 [2024-04-18 19:17:57.270238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113128 ] 00:24:41.630 [2024-04-18 19:17:57.448599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.888 [2024-04-18 19:17:57.741375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.881 19:17:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:42.881 19:17:58 -- common/autotest_common.sh@850 -- # return 0 00:24:42.881 19:17:58 -- event/cpu_locks.sh@49 -- # locks_exist 113128 00:24:42.881 19:17:58 -- event/cpu_locks.sh@22 -- # lslocks -p 113128 00:24:42.881 19:17:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:43.453 19:17:59 -- event/cpu_locks.sh@50 -- # killprocess 113128 00:24:43.453 19:17:59 -- common/autotest_common.sh@936 -- # '[' -z 113128 ']' 00:24:43.453 19:17:59 -- common/autotest_common.sh@940 -- # kill -0 113128 00:24:43.453 19:17:59 -- common/autotest_common.sh@941 -- # uname 00:24:43.453 19:17:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.453 19:17:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113128 00:24:43.453 19:17:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:43.453 19:17:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:43.453 19:17:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113128' 00:24:43.453 killing process with pid 113128 00:24:43.453 19:17:59 -- common/autotest_common.sh@955 -- # kill 113128 00:24:43.453 19:17:59 -- common/autotest_common.sh@960 -- # wait 113128 00:24:45.981 19:18:01 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113128 00:24:45.981 19:18:01 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.981 19:18:01 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113128 00:24:45.981 19:18:01 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:24:45.981 19:18:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.981 19:18:01 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:24:45.981 19:18:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.981 19:18:01 -- common/autotest_common.sh@641 -- # waitforlisten 113128 00:24:45.981 19:18:01 -- common/autotest_common.sh@817 -- # '[' -z 113128 ']' 00:24:45.981 19:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.981 19:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:45.981 19:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:45.981 19:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:45.981 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:24:45.981 ERROR: process (pid: 113128) is no longer running 00:24:45.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113128) - No such process 00:24:45.981 19:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.981 19:18:01 -- common/autotest_common.sh@850 -- # return 1 00:24:45.981 19:18:01 -- common/autotest_common.sh@641 -- # es=1 00:24:45.981 19:18:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.981 19:18:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.981 19:18:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.981 19:18:01 -- event/cpu_locks.sh@54 -- # no_locks 00:24:45.981 19:18:01 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:24:45.981 19:18:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:24:45.981 ************************************ 00:24:45.981 END TEST default_locks 00:24:45.981 ************************************ 00:24:45.981 19:18:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:24:45.981 00:24:45.981 real 0m4.553s 00:24:45.981 user 0m4.668s 00:24:45.981 sys 0m0.647s 00:24:45.981 19:18:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:45.981 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:24:45.981 19:18:01 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:24:45.981 19:18:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:45.981 19:18:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:45.981 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:24:45.981 ************************************ 00:24:45.981 START TEST default_locks_via_rpc 00:24:45.981 ************************************ 00:24:45.981 19:18:01 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:24:45.981 19:18:01 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=113233 00:24:45.981 19:18:01 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:45.981 19:18:01 -- event/cpu_locks.sh@63 -- # waitforlisten 113233 00:24:45.981 19:18:01 -- common/autotest_common.sh@817 -- # '[' -z 113233 ']' 00:24:45.981 19:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.981 19:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:45.981 19:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.981 19:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:45.981 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:24:45.981 [2024-04-18 19:18:01.903210] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
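The lock checks traced above come down to two small helpers: locks_exist, which requires the target pid to hold a file lock whose name contains spdk_cpu_lock, and no_locks, which requires that no /var/tmp/spdk_cpu_lock* file is present. A hedged sketch of both, reconstructed from the xtrace; the bodies in cpu_locks.sh may differ in detail, and the nullglob behaviour is inferred from the '(( 0 != 0 ))' seen above.

    locks_exist() {            # e.g. locks_exist 113233 while the target is running
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {               # passes only when the glob expands to nothing
        local lock_files=(/var/tmp/spdk_cpu_lock*)
        (( ${#lock_files[@]} == 0 ))
    }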
00:24:45.981 [2024-04-18 19:18:01.903392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113233 ] 00:24:46.239 [2024-04-18 19:18:02.069670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.497 [2024-04-18 19:18:02.356109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.436 19:18:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:47.436 19:18:03 -- common/autotest_common.sh@850 -- # return 0 00:24:47.436 19:18:03 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:24:47.436 19:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.436 19:18:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.436 19:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.436 19:18:03 -- event/cpu_locks.sh@67 -- # no_locks 00:24:47.436 19:18:03 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:24:47.436 19:18:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:24:47.436 19:18:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:24:47.436 19:18:03 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:24:47.436 19:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.436 19:18:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.436 19:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.436 19:18:03 -- event/cpu_locks.sh@71 -- # locks_exist 113233 00:24:47.436 19:18:03 -- event/cpu_locks.sh@22 -- # lslocks -p 113233 00:24:47.436 19:18:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:48.002 19:18:03 -- event/cpu_locks.sh@73 -- # killprocess 113233 00:24:48.002 19:18:03 -- common/autotest_common.sh@936 -- # '[' -z 113233 ']' 00:24:48.002 19:18:03 -- common/autotest_common.sh@940 -- # kill -0 113233 00:24:48.002 19:18:03 -- common/autotest_common.sh@941 -- # uname 00:24:48.002 19:18:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.002 19:18:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113233 00:24:48.002 19:18:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:48.002 19:18:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:48.002 19:18:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113233' 00:24:48.002 killing process with pid 113233 00:24:48.002 19:18:03 -- common/autotest_common.sh@955 -- # kill 113233 00:24:48.002 19:18:03 -- common/autotest_common.sh@960 -- # wait 113233 00:24:50.532 00:24:50.532 real 0m4.481s 00:24:50.532 user 0m4.604s 00:24:50.532 sys 0m0.646s 00:24:50.532 19:18:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:50.532 ************************************ 00:24:50.532 END TEST default_locks_via_rpc 00:24:50.532 ************************************ 00:24:50.532 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.532 19:18:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:24:50.532 19:18:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:50.532 19:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:50.532 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.532 ************************************ 00:24:50.532 START TEST non_locking_app_on_locked_coremask 00:24:50.532 
************************************ 00:24:50.532 19:18:06 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:24:50.532 19:18:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113334 00:24:50.532 19:18:06 -- event/cpu_locks.sh@81 -- # waitforlisten 113334 /var/tmp/spdk.sock 00:24:50.532 19:18:06 -- common/autotest_common.sh@817 -- # '[' -z 113334 ']' 00:24:50.532 19:18:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.532 19:18:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:50.532 19:18:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.532 19:18:06 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:50.532 19:18:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:50.532 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.791 [2024-04-18 19:18:06.485861] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:24:50.791 [2024-04-18 19:18:06.486053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113334 ] 00:24:50.791 [2024-04-18 19:18:06.656829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.050 [2024-04-18 19:18:06.902219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.986 19:18:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:51.986 19:18:07 -- common/autotest_common.sh@850 -- # return 0 00:24:51.986 19:18:07 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113354 00:24:51.986 19:18:07 -- event/cpu_locks.sh@85 -- # waitforlisten 113354 /var/tmp/spdk2.sock 00:24:51.986 19:18:07 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:24:51.986 19:18:07 -- common/autotest_common.sh@817 -- # '[' -z 113354 ']' 00:24:51.986 19:18:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:51.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:51.986 19:18:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.986 19:18:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:51.986 19:18:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.986 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.279 [2024-04-18 19:18:07.978757] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:24:52.279 [2024-04-18 19:18:07.979643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113354 ] 00:24:52.279 [2024-04-18 19:18:08.156775] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
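The 'CPU core locks deactivated' notice just above is the point of this test: the second target shares core mask 0x1 with the first, but because it was launched with --disable-cpumask-locks it never tries to claim core 0, so both can run while the first keeps the lock. Schematically (pids and sockets taken from the trace; the real test uses waitforlisten rather than backgrounding):

    build/bin/spdk_tgt -m 0x1 &                                   # pid 113334 claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                                  # pid 113354 skips the claim
    lslocks -p 113334 | grep -q spdk_cpu_lock                     # lock still owned by the first pid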
00:24:52.279 [2024-04-18 19:18:08.156854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.846 [2024-04-18 19:18:08.622139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.378 19:18:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:55.378 19:18:10 -- common/autotest_common.sh@850 -- # return 0 00:24:55.378 19:18:10 -- event/cpu_locks.sh@87 -- # locks_exist 113334 00:24:55.378 19:18:10 -- event/cpu_locks.sh@22 -- # lslocks -p 113334 00:24:55.378 19:18:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:55.636 19:18:11 -- event/cpu_locks.sh@89 -- # killprocess 113334 00:24:55.636 19:18:11 -- common/autotest_common.sh@936 -- # '[' -z 113334 ']' 00:24:55.636 19:18:11 -- common/autotest_common.sh@940 -- # kill -0 113334 00:24:55.636 19:18:11 -- common/autotest_common.sh@941 -- # uname 00:24:55.636 19:18:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.636 19:18:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113334 00:24:55.636 19:18:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:55.636 19:18:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:55.636 19:18:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113334' 00:24:55.636 killing process with pid 113334 00:24:55.636 19:18:11 -- common/autotest_common.sh@955 -- # kill 113334 00:24:55.636 19:18:11 -- common/autotest_common.sh@960 -- # wait 113334 00:25:00.903 19:18:16 -- event/cpu_locks.sh@90 -- # killprocess 113354 00:25:00.903 19:18:16 -- common/autotest_common.sh@936 -- # '[' -z 113354 ']' 00:25:00.903 19:18:16 -- common/autotest_common.sh@940 -- # kill -0 113354 00:25:00.903 19:18:16 -- common/autotest_common.sh@941 -- # uname 00:25:00.903 19:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.903 19:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113354 00:25:00.903 19:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:00.903 killing process with pid 113354 00:25:00.903 19:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:00.903 19:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113354' 00:25:00.903 19:18:16 -- common/autotest_common.sh@955 -- # kill 113354 00:25:00.903 19:18:16 -- common/autotest_common.sh@960 -- # wait 113354 00:25:04.186 00:25:04.186 real 0m13.220s 00:25:04.186 user 0m13.682s 00:25:04.186 sys 0m1.300s 00:25:04.186 ************************************ 00:25:04.186 END TEST non_locking_app_on_locked_coremask 00:25:04.186 ************************************ 00:25:04.186 19:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:04.186 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:04.186 19:18:19 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:25:04.186 19:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:04.186 19:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:04.186 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:04.186 ************************************ 00:25:04.186 START TEST locking_app_on_unlocked_coremask 00:25:04.186 ************************************ 00:25:04.186 19:18:19 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:25:04.186 19:18:19 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=113561 00:25:04.186 19:18:19 -- event/cpu_locks.sh@99 -- # waitforlisten 113561 
/var/tmp/spdk.sock 00:25:04.186 19:18:19 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:25:04.186 19:18:19 -- common/autotest_common.sh@817 -- # '[' -z 113561 ']' 00:25:04.186 19:18:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.186 19:18:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:04.186 19:18:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.186 19:18:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:04.186 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:04.186 [2024-04-18 19:18:19.782236] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:04.186 [2024-04-18 19:18:19.782580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113561 ] 00:25:04.186 [2024-04-18 19:18:19.949814] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:25:04.186 [2024-04-18 19:18:19.949916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.444 [2024-04-18 19:18:20.225283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.447 19:18:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:05.447 19:18:21 -- common/autotest_common.sh@850 -- # return 0 00:25:05.447 19:18:21 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=113581 00:25:05.447 19:18:21 -- event/cpu_locks.sh@103 -- # waitforlisten 113581 /var/tmp/spdk2.sock 00:25:05.447 19:18:21 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:25:05.447 19:18:21 -- common/autotest_common.sh@817 -- # '[' -z 113581 ']' 00:25:05.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:05.447 19:18:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:05.447 19:18:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.447 19:18:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:05.447 19:18:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.447 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:25:05.447 [2024-04-18 19:18:21.352023] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
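locking_app_on_unlocked_coremask mirrors the previous case: here the first target is the one started with --disable-cpumask-locks, so core 0 stays unclaimed and a plain second instance can take the lock itself, which is what the later locks_exist 113581 check confirms. In outline (again schematic, not the literal script):

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &           # pid 113561: no lock taken
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &            # pid 113581: free to claim core 0
    lslocks -p 113581 | grep -q spdk_cpu_lock                     # lock now belongs to the second pid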
00:25:05.447 [2024-04-18 19:18:21.352175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113581 ] 00:25:05.704 [2024-04-18 19:18:21.513240] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.267 [2024-04-18 19:18:22.017952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.164 19:18:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:08.164 19:18:24 -- common/autotest_common.sh@850 -- # return 0 00:25:08.164 19:18:24 -- event/cpu_locks.sh@105 -- # locks_exist 113581 00:25:08.164 19:18:24 -- event/cpu_locks.sh@22 -- # lslocks -p 113581 00:25:08.164 19:18:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:08.738 19:18:24 -- event/cpu_locks.sh@107 -- # killprocess 113561 00:25:08.738 19:18:24 -- common/autotest_common.sh@936 -- # '[' -z 113561 ']' 00:25:08.738 19:18:24 -- common/autotest_common.sh@940 -- # kill -0 113561 00:25:08.738 19:18:24 -- common/autotest_common.sh@941 -- # uname 00:25:08.738 19:18:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.738 19:18:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113561 00:25:08.738 19:18:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:08.738 killing process with pid 113561 00:25:08.738 19:18:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:08.738 19:18:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113561' 00:25:08.738 19:18:24 -- common/autotest_common.sh@955 -- # kill 113561 00:25:08.738 19:18:24 -- common/autotest_common.sh@960 -- # wait 113561 00:25:15.296 19:18:29 -- event/cpu_locks.sh@108 -- # killprocess 113581 00:25:15.296 19:18:29 -- common/autotest_common.sh@936 -- # '[' -z 113581 ']' 00:25:15.296 19:18:29 -- common/autotest_common.sh@940 -- # kill -0 113581 00:25:15.296 19:18:29 -- common/autotest_common.sh@941 -- # uname 00:25:15.296 19:18:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:15.296 19:18:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113581 00:25:15.296 19:18:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:15.296 killing process with pid 113581 00:25:15.296 19:18:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:15.296 19:18:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113581' 00:25:15.296 19:18:29 -- common/autotest_common.sh@955 -- # kill 113581 00:25:15.296 19:18:29 -- common/autotest_common.sh@960 -- # wait 113581 00:25:17.293 00:25:17.293 real 0m13.307s 00:25:17.293 user 0m13.902s 00:25:17.293 sys 0m1.316s 00:25:17.293 19:18:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:17.293 ************************************ 00:25:17.293 END TEST locking_app_on_unlocked_coremask 00:25:17.293 ************************************ 00:25:17.293 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:25:17.293 19:18:33 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:25:17.293 19:18:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:17.293 19:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:17.293 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:25:17.293 ************************************ 00:25:17.293 START TEST locking_app_on_locked_coremask 00:25:17.293 
************************************ 00:25:17.293 19:18:33 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:25:17.293 19:18:33 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=113805 00:25:17.293 19:18:33 -- event/cpu_locks.sh@116 -- # waitforlisten 113805 /var/tmp/spdk.sock 00:25:17.293 19:18:33 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:17.293 19:18:33 -- common/autotest_common.sh@817 -- # '[' -z 113805 ']' 00:25:17.293 19:18:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.293 19:18:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:17.293 19:18:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.293 19:18:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:17.293 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:25:17.293 [2024-04-18 19:18:33.193229] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:17.293 [2024-04-18 19:18:33.193538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113805 ] 00:25:17.569 [2024-04-18 19:18:33.362646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.832 [2024-04-18 19:18:33.667787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.206 19:18:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:19.206 19:18:34 -- common/autotest_common.sh@850 -- # return 0 00:25:19.206 19:18:34 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:25:19.206 19:18:34 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=113832 00:25:19.206 19:18:34 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 113832 /var/tmp/spdk2.sock 00:25:19.206 19:18:34 -- common/autotest_common.sh@638 -- # local es=0 00:25:19.206 19:18:34 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113832 /var/tmp/spdk2.sock 00:25:19.206 19:18:34 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:25:19.206 19:18:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:19.206 19:18:34 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:25:19.206 19:18:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:19.206 19:18:34 -- common/autotest_common.sh@641 -- # waitforlisten 113832 /var/tmp/spdk2.sock 00:25:19.206 19:18:34 -- common/autotest_common.sh@817 -- # '[' -z 113832 ']' 00:25:19.206 19:18:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:19.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:19.206 19:18:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.206 19:18:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:19.206 19:18:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.206 19:18:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.206 [2024-04-18 19:18:35.054030] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
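The NOT wrapper entering the trace here is the expected-failure idiom used throughout these tests: it runs a command that must fail (a second target trying to listen while core 0 is already claimed) and inverts the exit status. A simplified sketch of the pattern as it reads from the xtrace; the real autotest_common.sh helper also validates its argument and special-cases exit codes above 128.

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))     # status 0 only if the wrapped command failed
    }

    # used here: the second target must not come up on an already-claimed core
    NOT waitforlisten 113832 /var/tmp/spdk2.sock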
00:25:19.206 [2024-04-18 19:18:35.054183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113832 ] 00:25:19.464 [2024-04-18 19:18:35.222173] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 113805 has claimed it. 00:25:19.464 [2024-04-18 19:18:35.222301] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:25:20.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113832) - No such process 00:25:20.058 ERROR: process (pid: 113832) is no longer running 00:25:20.058 19:18:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:20.058 19:18:35 -- common/autotest_common.sh@850 -- # return 1 00:25:20.058 19:18:35 -- common/autotest_common.sh@641 -- # es=1 00:25:20.058 19:18:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:20.058 19:18:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:20.058 19:18:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:20.058 19:18:35 -- event/cpu_locks.sh@122 -- # locks_exist 113805 00:25:20.058 19:18:35 -- event/cpu_locks.sh@22 -- # lslocks -p 113805 00:25:20.058 19:18:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:20.342 19:18:36 -- event/cpu_locks.sh@124 -- # killprocess 113805 00:25:20.342 19:18:36 -- common/autotest_common.sh@936 -- # '[' -z 113805 ']' 00:25:20.342 19:18:36 -- common/autotest_common.sh@940 -- # kill -0 113805 00:25:20.342 19:18:36 -- common/autotest_common.sh@941 -- # uname 00:25:20.342 19:18:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.342 19:18:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113805 00:25:20.342 19:18:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:20.342 killing process with pid 113805 00:25:20.342 19:18:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:20.342 19:18:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113805' 00:25:20.342 19:18:36 -- common/autotest_common.sh@955 -- # kill 113805 00:25:20.343 19:18:36 -- common/autotest_common.sh@960 -- # wait 113805 00:25:23.626 00:25:23.626 real 0m5.759s 00:25:23.626 user 0m5.918s 00:25:23.626 sys 0m0.894s 00:25:23.626 19:18:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:23.626 19:18:38 -- common/autotest_common.sh@10 -- # set +x 00:25:23.626 ************************************ 00:25:23.626 END TEST locking_app_on_locked_coremask 00:25:23.626 ************************************ 00:25:23.626 19:18:38 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:25:23.626 19:18:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:23.626 19:18:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.626 19:18:38 -- common/autotest_common.sh@10 -- # set +x 00:25:23.626 ************************************ 00:25:23.626 START TEST locking_overlapped_coremask 00:25:23.626 ************************************ 00:25:23.626 19:18:38 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:25:23.626 19:18:38 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=113917 00:25:23.626 19:18:38 -- event/cpu_locks.sh@133 -- # waitforlisten 113917 /var/tmp/spdk.sock 00:25:23.626 19:18:38 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x7 00:25:23.626 19:18:38 -- common/autotest_common.sh@817 -- # '[' -z 113917 ']' 00:25:23.626 19:18:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.626 19:18:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:23.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.626 19:18:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.626 19:18:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:23.626 19:18:38 -- common/autotest_common.sh@10 -- # set +x 00:25:23.626 [2024-04-18 19:18:39.056377] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:23.626 [2024-04-18 19:18:39.056582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113917 ] 00:25:23.626 [2024-04-18 19:18:39.248065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:23.626 [2024-04-18 19:18:39.510048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.626 [2024-04-18 19:18:39.510235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.626 [2024-04-18 19:18:39.510243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.999 19:18:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.999 19:18:40 -- common/autotest_common.sh@850 -- # return 0 00:25:24.999 19:18:40 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=113940 00:25:24.999 19:18:40 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 113940 /var/tmp/spdk2.sock 00:25:24.999 19:18:40 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:25:24.999 19:18:40 -- common/autotest_common.sh@638 -- # local es=0 00:25:24.999 19:18:40 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113940 /var/tmp/spdk2.sock 00:25:24.999 19:18:40 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:25:24.999 19:18:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:24.999 19:18:40 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:25:24.999 19:18:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:24.999 19:18:40 -- common/autotest_common.sh@641 -- # waitforlisten 113940 /var/tmp/spdk2.sock 00:25:24.999 19:18:40 -- common/autotest_common.sh@817 -- # '[' -z 113940 ']' 00:25:24.999 19:18:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:24.999 19:18:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:24.999 19:18:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:25.000 19:18:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.000 19:18:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.000 [2024-04-18 19:18:40.788728] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
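locking_overlapped_coremask uses two reactor masks that intersect on exactly one core, which is why the second target cannot start: 0x7 covers cores 0-2 for the first instance and 0x1c covers cores 2-4 for the second, so core 2 is contested. The overlap is easy to confirm:

    # 0x7  = 0b00111 -> cores 0,1,2   (first target, pid 113917)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target, pid 113940)
    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only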
00:25:25.000 [2024-04-18 19:18:40.788924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113940 ] 00:25:25.303 [2024-04-18 19:18:41.004356] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113917 has claimed it. 00:25:25.303 [2024-04-18 19:18:41.004660] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:25:25.564 ERROR: process (pid: 113940) is no longer running 00:25:25.564 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113940) - No such process 00:25:25.564 19:18:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:25.564 19:18:41 -- common/autotest_common.sh@850 -- # return 1 00:25:25.564 19:18:41 -- common/autotest_common.sh@641 -- # es=1 00:25:25.564 19:18:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:25.564 19:18:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:25.564 19:18:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:25.564 19:18:41 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:25:25.564 19:18:41 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:25:25.564 19:18:41 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:25:25.564 19:18:41 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:25:25.564 19:18:41 -- event/cpu_locks.sh@141 -- # killprocess 113917 00:25:25.564 19:18:41 -- common/autotest_common.sh@936 -- # '[' -z 113917 ']' 00:25:25.564 19:18:41 -- common/autotest_common.sh@940 -- # kill -0 113917 00:25:25.564 19:18:41 -- common/autotest_common.sh@941 -- # uname 00:25:25.564 19:18:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:25.564 19:18:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113917 00:25:25.564 19:18:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:25.564 killing process with pid 113917 00:25:25.564 19:18:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:25.564 19:18:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113917' 00:25:25.564 19:18:41 -- common/autotest_common.sh@955 -- # kill 113917 00:25:25.564 19:18:41 -- common/autotest_common.sh@960 -- # wait 113917 00:25:28.865 00:25:28.865 real 0m5.259s 00:25:28.865 user 0m13.810s 00:25:28.865 sys 0m0.701s 00:25:28.865 19:18:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:28.865 ************************************ 00:25:28.865 19:18:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.865 END TEST locking_overlapped_coremask 00:25:28.865 ************************************ 00:25:28.865 19:18:44 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:25:28.865 19:18:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:28.865 19:18:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.865 19:18:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.865 ************************************ 00:25:28.865 START TEST locking_overlapped_coremask_via_rpc 00:25:28.865 
************************************ 00:25:28.865 19:18:44 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:25:28.865 19:18:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=114043 00:25:28.865 19:18:44 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:25:28.865 19:18:44 -- event/cpu_locks.sh@149 -- # waitforlisten 114043 /var/tmp/spdk.sock 00:25:28.865 19:18:44 -- common/autotest_common.sh@817 -- # '[' -z 114043 ']' 00:25:28.865 19:18:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.865 19:18:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:28.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.865 19:18:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.865 19:18:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:28.865 19:18:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.865 [2024-04-18 19:18:44.411768] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:28.865 [2024-04-18 19:18:44.411976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114043 ] 00:25:28.865 [2024-04-18 19:18:44.605305] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:25:28.865 [2024-04-18 19:18:44.605403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:29.123 [2024-04-18 19:18:44.887406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.124 [2024-04-18 19:18:44.887517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.124 [2024-04-18 19:18:44.887517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.059 19:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:30.059 19:18:45 -- common/autotest_common.sh@850 -- # return 0 00:25:30.059 19:18:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=114065 00:25:30.059 19:18:45 -- event/cpu_locks.sh@153 -- # waitforlisten 114065 /var/tmp/spdk2.sock 00:25:30.059 19:18:45 -- common/autotest_common.sh@817 -- # '[' -z 114065 ']' 00:25:30.059 19:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:30.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:30.059 19:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:30.059 19:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:30.059 19:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:30.059 19:18:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.059 19:18:45 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:25:30.317 [2024-04-18 19:18:46.055538] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
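locking_overlapped_coremask_via_rpc starts both targets with --disable-cpumask-locks and only claims the cores afterwards over RPC, so the conflict surfaces as a JSON-RPC error instead of a startup failure. The method names below are the ones traced (the log's rpc_cmd helper ultimately drives scripts/rpc.py, invoked directly here), and the expected error is the response shown further down in the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # first target (mask 0x7): claiming its cores over RPC succeeds
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # second target (mask 0x1c): core 2 is already claimed, so this call fails with
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks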
00:25:30.317 [2024-04-18 19:18:46.056156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114065 ] 00:25:30.317 [2024-04-18 19:18:46.238255] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:25:30.317 [2024-04-18 19:18:46.238343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:30.934 [2024-04-18 19:18:46.723188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.934 [2024-04-18 19:18:46.735478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.934 [2024-04-18 19:18:46.735485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:33.117 19:18:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:33.117 19:18:48 -- common/autotest_common.sh@850 -- # return 0 00:25:33.117 19:18:48 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:25:33.117 19:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.117 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.117 19:18:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.117 19:18:48 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:33.117 19:18:48 -- common/autotest_common.sh@638 -- # local es=0 00:25:33.117 19:18:48 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:33.117 19:18:48 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:33.117 19:18:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:33.117 19:18:48 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:33.117 19:18:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:33.117 19:18:48 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:33.117 19:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.117 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.117 [2024-04-18 19:18:48.777091] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114043 has claimed it. 00:25:33.117 request: 00:25:33.117 { 00:25:33.117 "method": "framework_enable_cpumask_locks", 00:25:33.117 "req_id": 1 00:25:33.117 } 00:25:33.117 Got JSON-RPC error response 00:25:33.117 response: 00:25:33.117 { 00:25:33.117 "code": -32603, 00:25:33.117 "message": "Failed to claim CPU core: 2" 00:25:33.117 } 00:25:33.117 19:18:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:33.117 19:18:48 -- common/autotest_common.sh@641 -- # es=1 00:25:33.117 19:18:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:33.117 19:18:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:33.117 19:18:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:33.117 19:18:48 -- event/cpu_locks.sh@158 -- # waitforlisten 114043 /var/tmp/spdk.sock 00:25:33.117 19:18:48 -- common/autotest_common.sh@817 -- # '[' -z 114043 ']' 00:25:33.117 19:18:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:33.117 19:18:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:33.117 19:18:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.117 19:18:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:33.117 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.117 19:18:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:33.117 19:18:49 -- common/autotest_common.sh@850 -- # return 0 00:25:33.117 19:18:49 -- event/cpu_locks.sh@159 -- # waitforlisten 114065 /var/tmp/spdk2.sock 00:25:33.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:33.117 19:18:49 -- common/autotest_common.sh@817 -- # '[' -z 114065 ']' 00:25:33.117 19:18:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:33.117 19:18:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:33.117 19:18:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:33.117 19:18:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:33.117 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.375 19:18:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:33.375 19:18:49 -- common/autotest_common.sh@850 -- # return 0 00:25:33.375 19:18:49 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:25:33.375 19:18:49 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:25:33.375 ************************************ 00:25:33.375 END TEST locking_overlapped_coremask_via_rpc 00:25:33.375 ************************************ 00:25:33.375 19:18:49 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:25:33.375 19:18:49 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:25:33.375 00:25:33.375 real 0m4.910s 00:25:33.375 user 0m1.419s 00:25:33.375 sys 0m0.231s 00:25:33.375 19:18:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:33.375 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.375 19:18:49 -- event/cpu_locks.sh@174 -- # cleanup 00:25:33.375 19:18:49 -- event/cpu_locks.sh@15 -- # [[ -z 114043 ]] 00:25:33.375 19:18:49 -- event/cpu_locks.sh@15 -- # killprocess 114043 00:25:33.375 19:18:49 -- common/autotest_common.sh@936 -- # '[' -z 114043 ']' 00:25:33.375 19:18:49 -- common/autotest_common.sh@940 -- # kill -0 114043 00:25:33.375 19:18:49 -- common/autotest_common.sh@941 -- # uname 00:25:33.375 19:18:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:33.375 19:18:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114043 00:25:33.375 19:18:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:33.375 killing process with pid 114043 00:25:33.375 19:18:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:33.375 19:18:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114043' 00:25:33.375 19:18:49 -- common/autotest_common.sh@955 -- # kill 114043 00:25:33.375 19:18:49 -- common/autotest_common.sh@960 -- # wait 114043 00:25:36.659 19:18:52 -- event/cpu_locks.sh@16 -- # [[ -z 114065 ]] 00:25:36.659 19:18:52 -- event/cpu_locks.sh@16 -- # killprocess 114065 
00:25:36.659 19:18:52 -- common/autotest_common.sh@936 -- # '[' -z 114065 ']' 00:25:36.659 19:18:52 -- common/autotest_common.sh@940 -- # kill -0 114065 00:25:36.659 19:18:52 -- common/autotest_common.sh@941 -- # uname 00:25:36.659 19:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.659 19:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114065 00:25:36.659 19:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:36.660 killing process with pid 114065 00:25:36.660 19:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:36.660 19:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114065' 00:25:36.660 19:18:52 -- common/autotest_common.sh@955 -- # kill 114065 00:25:36.660 19:18:52 -- common/autotest_common.sh@960 -- # wait 114065 00:25:39.270 19:18:54 -- event/cpu_locks.sh@18 -- # rm -f 00:25:39.270 19:18:54 -- event/cpu_locks.sh@1 -- # cleanup 00:25:39.270 19:18:54 -- event/cpu_locks.sh@15 -- # [[ -z 114043 ]] 00:25:39.270 19:18:54 -- event/cpu_locks.sh@15 -- # killprocess 114043 00:25:39.270 19:18:54 -- common/autotest_common.sh@936 -- # '[' -z 114043 ']' 00:25:39.270 19:18:54 -- common/autotest_common.sh@940 -- # kill -0 114043 00:25:39.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (114043) - No such process 00:25:39.270 Process with pid 114043 is not found 00:25:39.270 19:18:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 114043 is not found' 00:25:39.270 19:18:54 -- event/cpu_locks.sh@16 -- # [[ -z 114065 ]] 00:25:39.270 Process with pid 114065 is not found 00:25:39.270 19:18:54 -- event/cpu_locks.sh@16 -- # killprocess 114065 00:25:39.270 19:18:54 -- common/autotest_common.sh@936 -- # '[' -z 114065 ']' 00:25:39.270 19:18:54 -- common/autotest_common.sh@940 -- # kill -0 114065 00:25:39.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (114065) - No such process 00:25:39.270 19:18:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 114065 is not found' 00:25:39.270 19:18:54 -- event/cpu_locks.sh@18 -- # rm -f 00:25:39.270 00:25:39.270 real 0m57.793s 00:25:39.270 user 1m37.300s 00:25:39.270 sys 0m6.831s 00:25:39.270 19:18:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:39.270 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:25:39.270 ************************************ 00:25:39.270 END TEST cpu_locks 00:25:39.270 ************************************ 00:25:39.270 00:25:39.270 real 1m31.595s 00:25:39.270 user 2m41.182s 00:25:39.270 sys 0m11.217s 00:25:39.270 19:18:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:39.270 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:25:39.270 ************************************ 00:25:39.270 END TEST event 00:25:39.270 ************************************ 00:25:39.270 19:18:54 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:25:39.270 19:18:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:39.270 19:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:39.270 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:25:39.270 ************************************ 00:25:39.270 START TEST thread 00:25:39.270 ************************************ 00:25:39.270 19:18:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:25:39.270 * Looking for test storage... 
00:25:39.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:25:39.270 19:18:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:25:39.270 19:18:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:25:39.270 19:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:39.270 19:18:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.270 ************************************ 00:25:39.270 START TEST thread_poller_perf 00:25:39.270 ************************************ 00:25:39.270 19:18:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:25:39.270 [2024-04-18 19:18:55.167406] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:39.270 [2024-04-18 19:18:55.167601] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114304 ] 00:25:39.529 [2024-04-18 19:18:55.353780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.787 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:25:39.787 [2024-04-18 19:18:55.582370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.161 ====================================== 00:25:41.161 busy:2107518652 (cyc) 00:25:41.161 total_run_count: 346000 00:25:41.161 tsc_hz: 2100000000 (cyc) 00:25:41.161 ====================================== 00:25:41.161 poller_cost: 6091 (cyc), 2900 (nsec) 00:25:41.419 00:25:41.419 real 0m1.979s 00:25:41.419 user 0m1.770s 00:25:41.419 sys 0m0.105s 00:25:41.419 19:18:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:41.419 ************************************ 00:25:41.419 END TEST thread_poller_perf 00:25:41.419 ************************************ 00:25:41.419 19:18:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.419 19:18:57 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:25:41.419 19:18:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:25:41.419 19:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:41.419 19:18:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.419 ************************************ 00:25:41.419 START TEST thread_poller_perf 00:25:41.419 ************************************ 00:25:41.419 19:18:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:25:41.419 [2024-04-18 19:18:57.224883] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:41.419 [2024-04-18 19:18:57.225076] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114358 ] 00:25:41.677 [2024-04-18 19:18:57.405302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.935 [2024-04-18 19:18:57.690894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.935 Running 1000 pollers for 1 seconds with 0 microseconds period. 
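(The poller_cost figure that poller_perf reports is, as far as these runs show, just the measured busy TSC cycles divided by the number of poller runs, converted to nanoseconds using the reported tsc_hz. For the 1-microsecond-period run above, and assuming that formula, the numbers reproduce as:

    poller_cost = busy / total_run_count = 2107518652 / 346000 ≈ 6091 cycles
    6091 cycles / 2100000000 cycles-per-second ≈ 2900 nsec

The 0-microsecond-period run whose results follow is reported the same way, and its figures are consistent with the same calculation.)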
00:25:43.309 ====================================== 00:25:43.309 busy:2104050966 (cyc) 00:25:43.309 total_run_count: 4489000 00:25:43.309 tsc_hz: 2100000000 (cyc) 00:25:43.309 ====================================== 00:25:43.309 poller_cost: 468 (cyc), 222 (nsec) 00:25:43.309 00:25:43.309 real 0m2.020s 00:25:43.309 user 0m1.791s 00:25:43.309 sys 0m0.129s 00:25:43.309 19:18:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:43.309 19:18:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.309 ************************************ 00:25:43.309 END TEST thread_poller_perf 00:25:43.309 ************************************ 00:25:43.601 19:18:59 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:25:43.601 19:18:59 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:25:43.601 19:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:43.601 19:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:43.601 19:18:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.601 ************************************ 00:25:43.601 START TEST thread_spdk_lock 00:25:43.601 ************************************ 00:25:43.601 19:18:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:25:43.601 [2024-04-18 19:18:59.341566] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:43.601 [2024-04-18 19:18:59.341720] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114410 ] 00:25:43.601 [2024-04-18 19:18:59.506431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:43.858 [2024-04-18 19:18:59.715101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.858 [2024-04-18 19:18:59.715104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.428 [2024-04-18 19:19:00.250363] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:25:44.428 [2024-04-18 19:19:00.250470] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:25:44.428 [2024-04-18 19:19:00.250496] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55d650f14d40 00:25:44.428 [2024-04-18 19:19:00.260451] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:25:44.428 [2024-04-18 19:19:00.260551] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:25:44.428 [2024-04-18 19:19:00.260583] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:25:44.993 Starting test contend 00:25:44.993 Worker Delay Wait us Hold us Total us 00:25:44.993 0 3 127909 199860 327770 00:25:44.993 1 5 50797 301983 352781 00:25:44.993 PASS test contend 00:25:44.993 Starting test hold_by_poller 
00:25:44.993 PASS test hold_by_poller 00:25:44.993 Starting test hold_by_message 00:25:44.993 PASS test hold_by_message 00:25:44.993 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:25:44.993 100014 assertions passed 00:25:44.993 0 assertions failed 00:25:44.993 00:25:44.993 real 0m1.382s 00:25:44.993 user 0m1.700s 00:25:44.993 sys 0m0.129s 00:25:44.993 19:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.993 ************************************ 00:25:44.993 END TEST thread_spdk_lock 00:25:44.993 ************************************ 00:25:44.993 19:19:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.993 00:25:44.993 real 0m5.749s 00:25:44.993 user 0m5.456s 00:25:44.993 sys 0m0.536s 00:25:44.993 19:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.993 ************************************ 00:25:44.993 END TEST thread 00:25:44.993 ************************************ 00:25:44.993 19:19:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.993 19:19:00 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:25:44.993 19:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:44.993 19:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.993 19:19:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.993 ************************************ 00:25:44.993 START TEST accel 00:25:44.993 ************************************ 00:25:44.993 19:19:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:25:44.993 * Looking for test storage... 00:25:44.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:25:44.993 19:19:00 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:25:44.993 19:19:00 -- accel/accel.sh@82 -- # get_expected_opcs 00:25:44.993 19:19:00 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:25:45.250 19:19:00 -- accel/accel.sh@62 -- # spdk_tgt_pid=114502 00:25:45.250 19:19:00 -- accel/accel.sh@63 -- # waitforlisten 114502 00:25:45.250 19:19:00 -- common/autotest_common.sh@817 -- # '[' -z 114502 ']' 00:25:45.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.250 19:19:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.250 19:19:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.250 19:19:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.250 19:19:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.250 19:19:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.250 19:19:00 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:25:45.251 19:19:00 -- accel/accel.sh@61 -- # build_accel_config 00:25:45.251 19:19:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:45.251 19:19:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:45.251 19:19:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:45.251 19:19:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:45.251 19:19:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:45.251 19:19:00 -- accel/accel.sh@40 -- # local IFS=, 00:25:45.251 19:19:00 -- accel/accel.sh@41 -- # jq -r . 00:25:45.251 [2024-04-18 19:19:00.991680] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:25:45.251 [2024-04-18 19:19:00.991997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114502 ] 00:25:45.251 [2024-04-18 19:19:01.161373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.508 [2024-04-18 19:19:01.433979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.883 19:19:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.883 19:19:02 -- common/autotest_common.sh@850 -- # return 0 00:25:46.883 19:19:02 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:25:46.883 19:19:02 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:25:46.883 19:19:02 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:25:46.883 19:19:02 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:25:46.883 19:19:02 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:25:46.883 19:19:02 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:25:46.883 19:19:02 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:25:46.883 19:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.883 19:19:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.883 19:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 
19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # IFS== 00:25:46.883 19:19:02 -- accel/accel.sh@72 -- # read -r opc module 00:25:46.883 19:19:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:25:46.883 19:19:02 -- accel/accel.sh@75 -- # killprocess 114502 00:25:46.883 19:19:02 -- common/autotest_common.sh@936 -- # '[' -z 114502 ']' 00:25:46.883 19:19:02 -- common/autotest_common.sh@940 -- # kill -0 114502 00:25:46.883 19:19:02 -- common/autotest_common.sh@941 -- # uname 00:25:46.883 19:19:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:46.883 19:19:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114502 00:25:46.883 19:19:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:46.883 19:19:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:46.883 killing process with pid 114502 00:25:46.883 19:19:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114502' 00:25:46.883 19:19:02 -- common/autotest_common.sh@955 -- # kill 114502 00:25:46.883 19:19:02 -- common/autotest_common.sh@960 -- # wait 114502 00:25:49.414 19:19:05 -- accel/accel.sh@76 -- # trap - ERR 00:25:49.414 19:19:05 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:25:49.414 19:19:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:49.414 19:19:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.414 19:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.414 19:19:05 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:25:49.414 19:19:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:25:49.414 19:19:05 -- accel/accel.sh@12 -- # build_accel_config 00:25:49.414 19:19:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:49.414 19:19:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:49.414 19:19:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:49.414 
19:19:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:49.414 19:19:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:49.414 19:19:05 -- accel/accel.sh@40 -- # local IFS=, 00:25:49.414 19:19:05 -- accel/accel.sh@41 -- # jq -r . 00:25:49.673 19:19:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:49.673 19:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.673 19:19:05 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:25:49.673 19:19:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:25:49.673 19:19:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.673 19:19:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.673 ************************************ 00:25:49.673 START TEST accel_missing_filename 00:25:49.673 ************************************ 00:25:49.673 19:19:05 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:25:49.673 19:19:05 -- common/autotest_common.sh@638 -- # local es=0 00:25:49.673 19:19:05 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:25:49.673 19:19:05 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:25:49.673 19:19:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:49.673 19:19:05 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:25:49.673 19:19:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:49.673 19:19:05 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:25:49.673 19:19:05 -- accel/accel.sh@12 -- # build_accel_config 00:25:49.673 19:19:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:25:49.673 19:19:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:49.673 19:19:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:49.673 19:19:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:49.673 19:19:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:49.673 19:19:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:49.673 19:19:05 -- accel/accel.sh@40 -- # local IFS=, 00:25:49.673 19:19:05 -- accel/accel.sh@41 -- # jq -r . 00:25:49.673 [2024-04-18 19:19:05.497556] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:49.673 [2024-04-18 19:19:05.497756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114629 ] 00:25:49.931 [2024-04-18 19:19:05.677905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.188 [2024-04-18 19:19:05.953528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.446 [2024-04-18 19:19:06.175145] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:51.010 [2024-04-18 19:19:06.779654] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:25:51.575 A filename is required. 
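(The "A filename is required." abort above is the point of this negative test: accel_perf's compress workload will not start without an uncompressed input file. As a rough sketch built only from flags and paths that appear elsewhere in this log, the failing invocation and the usual way of supplying the input differ only in the -l argument; whether the second form then runs cleanly is outside what this log captures, and the accel_compress_verify test that follows instead adds -y, which compression also rejects:

    # aborts as above: compress workload with no -l input file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # the -l input file used by the accel_compress_verify run below
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib

)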
00:25:51.575 19:19:07 -- common/autotest_common.sh@641 -- # es=234 00:25:51.575 19:19:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:51.575 19:19:07 -- common/autotest_common.sh@650 -- # es=106 00:25:51.575 19:19:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:25:51.575 19:19:07 -- common/autotest_common.sh@658 -- # es=1 00:25:51.575 19:19:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:51.575 00:25:51.575 real 0m1.797s 00:25:51.575 user 0m1.538s 00:25:51.575 sys 0m0.208s 00:25:51.575 19:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:51.575 19:19:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 ************************************ 00:25:51.575 END TEST accel_missing_filename 00:25:51.575 ************************************ 00:25:51.575 19:19:07 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:25:51.575 19:19:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:25:51.575 19:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:51.575 19:19:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 ************************************ 00:25:51.575 START TEST accel_compress_verify 00:25:51.575 ************************************ 00:25:51.575 19:19:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:25:51.575 19:19:07 -- common/autotest_common.sh@638 -- # local es=0 00:25:51.575 19:19:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:25:51.575 19:19:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:25:51.575 19:19:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:51.575 19:19:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:25:51.575 19:19:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:51.575 19:19:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:25:51.575 19:19:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:25:51.575 19:19:07 -- accel/accel.sh@12 -- # build_accel_config 00:25:51.575 19:19:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:51.575 19:19:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:51.575 19:19:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:51.575 19:19:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:51.575 19:19:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:51.575 19:19:07 -- accel/accel.sh@40 -- # local IFS=, 00:25:51.575 19:19:07 -- accel/accel.sh@41 -- # jq -r . 00:25:51.575 [2024-04-18 19:19:07.368094] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:25:51.575 [2024-04-18 19:19:07.368441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114676 ] 00:25:51.833 [2024-04-18 19:19:07.535388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.090 [2024-04-18 19:19:07.800220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.346 [2024-04-18 19:19:08.029300] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:52.910 [2024-04-18 19:19:08.641099] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:25:53.168 00:25:53.168 Compression does not support the verify option, aborting. 00:25:53.168 19:19:09 -- common/autotest_common.sh@641 -- # es=161 00:25:53.168 19:19:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:53.168 19:19:09 -- common/autotest_common.sh@650 -- # es=33 00:25:53.168 19:19:09 -- common/autotest_common.sh@651 -- # case "$es" in 00:25:53.168 19:19:09 -- common/autotest_common.sh@658 -- # es=1 00:25:53.168 19:19:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:53.168 00:25:53.168 real 0m1.767s 00:25:53.168 user 0m1.550s 00:25:53.168 sys 0m0.161s 00:25:53.168 19:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.168 19:19:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.168 ************************************ 00:25:53.168 END TEST accel_compress_verify 00:25:53.168 ************************************ 00:25:53.426 19:19:09 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:25:53.426 19:19:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:25:53.426 19:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.426 19:19:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.426 ************************************ 00:25:53.426 START TEST accel_wrong_workload 00:25:53.426 ************************************ 00:25:53.426 19:19:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:25:53.426 19:19:09 -- common/autotest_common.sh@638 -- # local es=0 00:25:53.426 19:19:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:25:53.426 19:19:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:25:53.426 19:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:53.426 19:19:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:25:53.426 19:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:53.426 19:19:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:25:53.426 19:19:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:25:53.426 19:19:09 -- accel/accel.sh@12 -- # build_accel_config 00:25:53.426 19:19:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:53.426 19:19:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:53.426 19:19:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:53.426 19:19:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:53.426 19:19:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:53.426 19:19:09 -- accel/accel.sh@40 -- # local IFS=, 00:25:53.426 19:19:09 -- accel/accel.sh@41 -- # jq -r . 
00:25:53.426 Unsupported workload type: foobar 00:25:53.426 [2024-04-18 19:19:09.236571] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:25:53.426 accel_perf options: 00:25:53.426 [-h help message] 00:25:53.426 [-q queue depth per core] 00:25:53.426 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:25:53.426 [-T number of threads per core 00:25:53.426 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:25:53.426 [-t time in seconds] 00:25:53.426 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:25:53.426 [ dif_verify, , dif_generate, dif_generate_copy 00:25:53.426 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:25:53.426 [-l for compress/decompress workloads, name of uncompressed input file 00:25:53.426 [-S for crc32c workload, use this seed value (default 0) 00:25:53.426 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:25:53.426 [-f for fill workload, use this BYTE value (default 255) 00:25:53.426 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:25:53.426 [-y verify result if this switch is on] 00:25:53.426 [-a tasks to allocate per core (default: same value as -q)] 00:25:53.426 Can be used to spread operations across a wider range of memory. 00:25:53.426 19:19:09 -- common/autotest_common.sh@641 -- # es=1 00:25:53.426 19:19:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:53.426 19:19:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:53.426 19:19:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:53.426 00:25:53.426 real 0m0.081s 00:25:53.426 user 0m0.089s 00:25:53.426 sys 0m0.057s 00:25:53.426 19:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.426 19:19:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.426 ************************************ 00:25:53.426 END TEST accel_wrong_workload 00:25:53.426 ************************************ 00:25:53.426 19:19:09 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:25:53.426 19:19:09 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:25:53.426 19:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.426 19:19:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.686 ************************************ 00:25:53.686 START TEST accel_negative_buffers 00:25:53.686 ************************************ 00:25:53.686 19:19:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:25:53.686 19:19:09 -- common/autotest_common.sh@638 -- # local es=0 00:25:53.686 19:19:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:25:53.686 19:19:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:25:53.686 19:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:53.686 19:19:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:25:53.686 19:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:53.686 19:19:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:25:53.686 19:19:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:25:53.686 19:19:09 -- accel/accel.sh@12 -- # 
build_accel_config 00:25:53.686 19:19:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:53.686 19:19:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:53.686 19:19:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:53.686 19:19:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:53.686 19:19:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:53.686 19:19:09 -- accel/accel.sh@40 -- # local IFS=, 00:25:53.686 19:19:09 -- accel/accel.sh@41 -- # jq -r . 00:25:53.686 -x option must be non-negative. 00:25:53.686 [2024-04-18 19:19:09.404053] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:25:53.686 accel_perf options: 00:25:53.686 [-h help message] 00:25:53.686 [-q queue depth per core] 00:25:53.686 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:25:53.686 [-T number of threads per core 00:25:53.686 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:25:53.686 [-t time in seconds] 00:25:53.686 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:25:53.686 [ dif_verify, , dif_generate, dif_generate_copy 00:25:53.686 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:25:53.686 [-l for compress/decompress workloads, name of uncompressed input file 00:25:53.686 [-S for crc32c workload, use this seed value (default 0) 00:25:53.686 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:25:53.686 [-f for fill workload, use this BYTE value (default 255) 00:25:53.686 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:25:53.686 [-y verify result if this switch is on] 00:25:53.686 [-a tasks to allocate per core (default: same value as -q)] 00:25:53.686 Can be used to spread operations across a wider range of memory. 
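(The usage text above is what accel_perf prints when these option-validation tests pass it a bad -w or -x value. For reference, the positive invocation that the accel_crc32c test below drives, assembled only from flags shown in this log, is:

    # software crc32c for 1 second with a 32-byte seed, verifying results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

The test harness additionally passes -c /dev/fd/62 to feed its generated JSON config, which is omitted here.)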
00:25:53.686 19:19:09 -- common/autotest_common.sh@641 -- # es=1 00:25:53.686 19:19:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:53.686 19:19:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:53.686 19:19:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:53.686 00:25:53.686 real 0m0.078s 00:25:53.686 user 0m0.087s 00:25:53.686 sys 0m0.044s 00:25:53.686 19:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.686 ************************************ 00:25:53.686 END TEST accel_negative_buffers 00:25:53.686 ************************************ 00:25:53.686 19:19:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.686 19:19:09 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:25:53.686 19:19:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:25:53.686 19:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.686 19:19:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.686 ************************************ 00:25:53.686 START TEST accel_crc32c 00:25:53.686 ************************************ 00:25:53.686 19:19:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:25:53.686 19:19:09 -- accel/accel.sh@16 -- # local accel_opc 00:25:53.686 19:19:09 -- accel/accel.sh@17 -- # local accel_module 00:25:53.686 19:19:09 -- accel/accel.sh@19 -- # IFS=: 00:25:53.686 19:19:09 -- accel/accel.sh@19 -- # read -r var val 00:25:53.686 19:19:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:25:53.686 19:19:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:25:53.686 19:19:09 -- accel/accel.sh@12 -- # build_accel_config 00:25:53.686 19:19:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:53.686 19:19:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:53.686 19:19:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:53.686 19:19:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:53.686 19:19:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:53.686 19:19:09 -- accel/accel.sh@40 -- # local IFS=, 00:25:53.686 19:19:09 -- accel/accel.sh@41 -- # jq -r . 00:25:53.686 [2024-04-18 19:19:09.572521] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:25:53.686 [2024-04-18 19:19:09.572772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114792 ] 00:25:53.945 [2024-04-18 19:19:09.753318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.202 [2024-04-18 19:19:10.030807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=0x1 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=crc32c 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=32 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=software 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@22 -- # accel_module=software 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=32 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=32 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=1 00:25:54.461 19:19:10 
-- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val=Yes 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:54.461 19:19:10 -- accel/accel.sh@20 -- # val= 00:25:54.461 19:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # IFS=: 00:25:54.461 19:19:10 -- accel/accel.sh@19 -- # read -r var val 00:25:57.001 19:19:12 -- accel/accel.sh@20 -- # val= 00:25:57.001 19:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.001 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.001 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.001 19:19:12 -- accel/accel.sh@20 -- # val= 00:25:57.001 19:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.001 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.001 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.001 19:19:12 -- accel/accel.sh@20 -- # val= 00:25:57.001 19:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.002 19:19:12 -- accel/accel.sh@20 -- # val= 00:25:57.002 19:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.002 19:19:12 -- accel/accel.sh@20 -- # val= 00:25:57.002 19:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.002 19:19:12 -- accel/accel.sh@20 -- # val= 00:25:57.002 19:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.002 19:19:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:57.002 19:19:12 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:25:57.002 19:19:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:57.002 00:25:57.002 real 0m3.004s 00:25:57.002 user 0m2.734s 00:25:57.002 sys 0m0.197s 00:25:57.002 19:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:57.002 ************************************ 00:25:57.002 END TEST accel_crc32c 00:25:57.002 ************************************ 00:25:57.002 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.002 19:19:12 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:25:57.002 19:19:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:25:57.002 19:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:57.002 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.002 ************************************ 00:25:57.002 START TEST accel_crc32c_C2 00:25:57.002 
************************************ 00:25:57.002 19:19:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:25:57.002 19:19:12 -- accel/accel.sh@16 -- # local accel_opc 00:25:57.002 19:19:12 -- accel/accel.sh@17 -- # local accel_module 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # IFS=: 00:25:57.002 19:19:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:25:57.002 19:19:12 -- accel/accel.sh@19 -- # read -r var val 00:25:57.002 19:19:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:25:57.002 19:19:12 -- accel/accel.sh@12 -- # build_accel_config 00:25:57.002 19:19:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:57.002 19:19:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:57.002 19:19:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:57.002 19:19:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:57.002 19:19:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:57.002 19:19:12 -- accel/accel.sh@40 -- # local IFS=, 00:25:57.002 19:19:12 -- accel/accel.sh@41 -- # jq -r . 00:25:57.002 [2024-04-18 19:19:12.670461] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:25:57.002 [2024-04-18 19:19:12.670713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114869 ] 00:25:57.002 [2024-04-18 19:19:12.850647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.260 [2024-04-18 19:19:13.144003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=0x1 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=crc32c 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=0 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case 
"$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=software 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@22 -- # accel_module=software 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=32 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=32 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=1 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val=Yes 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.518 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.518 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.518 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:57.777 19:19:13 -- accel/accel.sh@20 -- # val= 00:25:57.777 19:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:25:57.777 19:19:13 -- accel/accel.sh@19 -- # IFS=: 00:25:57.777 19:19:13 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@20 -- # val= 00:25:59.691 19:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@20 -- # val= 00:25:59.691 19:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@20 -- # val= 00:25:59.691 19:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@20 -- # val= 00:25:59.691 19:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@20 -- # val= 00:25:59.691 19:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@20 -- # val= 
00:25:59.691 19:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.691 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.691 19:19:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:59.691 19:19:15 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:25:59.691 19:19:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:59.691 00:25:59.691 real 0m2.927s 00:25:59.691 user 0m2.685s 00:25:59.691 sys 0m0.188s 00:25:59.691 ************************************ 00:25:59.691 END TEST accel_crc32c_C2 00:25:59.691 ************************************ 00:25:59.691 19:19:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:59.691 19:19:15 -- common/autotest_common.sh@10 -- # set +x 00:25:59.691 19:19:15 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:25:59.691 19:19:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:25:59.691 19:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.691 19:19:15 -- common/autotest_common.sh@10 -- # set +x 00:25:59.949 ************************************ 00:25:59.949 START TEST accel_copy 00:25:59.949 ************************************ 00:25:59.949 19:19:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:25:59.949 19:19:15 -- accel/accel.sh@16 -- # local accel_opc 00:25:59.949 19:19:15 -- accel/accel.sh@17 -- # local accel_module 00:25:59.949 19:19:15 -- accel/accel.sh@19 -- # IFS=: 00:25:59.949 19:19:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:25:59.949 19:19:15 -- accel/accel.sh@19 -- # read -r var val 00:25:59.949 19:19:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:25:59.949 19:19:15 -- accel/accel.sh@12 -- # build_accel_config 00:25:59.949 19:19:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:59.949 19:19:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:59.949 19:19:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:59.949 19:19:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:59.949 19:19:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:59.949 19:19:15 -- accel/accel.sh@40 -- # local IFS=, 00:25:59.949 19:19:15 -- accel/accel.sh@41 -- # jq -r . 00:25:59.949 [2024-04-18 19:19:15.681247] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:25:59.949 [2024-04-18 19:19:15.681467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114939 ] 00:25:59.949 [2024-04-18 19:19:15.859685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.207 [2024-04-18 19:19:16.112805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.464 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.464 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.464 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.464 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.464 19:19:16 -- accel/accel.sh@20 -- # val=0x1 00:26:00.464 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.464 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.464 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.464 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val=copy 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@23 -- # accel_opc=copy 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val=software 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@22 -- # accel_module=software 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val=32 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val=32 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val=1 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:00.465 
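The accel_copy case comes down to a single accel_perf invocation, and the full command line is recorded earlier in the trace (accel_perf -c /dev/fd/62 -t 1 -w copy -y). A sketch of re-running it by hand is below; the SPDK_DIR default is the path seen in this log, and dropping the harness's "-c /dev/fd/62" JSON config is an assumption that only holds for a plain software-module run.

  #!/usr/bin/env bash
  # Sketch: rerun the accel_copy case outside run_test.
  # Flags mirror "accel_test -t 1 -w copy -y" from the log; omitting the
  # JSON config normally fed over /dev/fd/62 is an assumption.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # path as seen in this log

  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy -y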
19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val=Yes 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:00.465 19:19:16 -- accel/accel.sh@20 -- # val= 00:26:00.465 19:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # IFS=: 00:26:00.465 19:19:16 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@20 -- # val= 00:26:02.993 19:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@20 -- # val= 00:26:02.993 19:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@20 -- # val= 00:26:02.993 19:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@20 -- # val= 00:26:02.993 19:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@20 -- # val= 00:26:02.993 19:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@20 -- # val= 00:26:02.993 19:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 ************************************ 00:26:02.993 END TEST accel_copy 00:26:02.993 ************************************ 00:26:02.993 19:19:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:02.993 19:19:18 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:26:02.993 19:19:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.993 00:26:02.993 real 0m2.853s 00:26:02.993 user 0m2.592s 00:26:02.993 sys 0m0.189s 00:26:02.993 19:19:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:02.993 19:19:18 -- common/autotest_common.sh@10 -- # set +x 00:26:02.993 19:19:18 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:02.993 19:19:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:26:02.993 19:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:02.993 19:19:18 -- common/autotest_common.sh@10 -- # set +x 00:26:02.993 ************************************ 00:26:02.993 START TEST accel_fill 00:26:02.993 ************************************ 00:26:02.993 19:19:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:02.993 19:19:18 -- accel/accel.sh@16 -- # local accel_opc 00:26:02.993 19:19:18 -- accel/accel.sh@17 -- # local 
accel_module 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # IFS=: 00:26:02.993 19:19:18 -- accel/accel.sh@19 -- # read -r var val 00:26:02.993 19:19:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:02.993 19:19:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:02.993 19:19:18 -- accel/accel.sh@12 -- # build_accel_config 00:26:02.993 19:19:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:02.993 19:19:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:02.993 19:19:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:02.993 19:19:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:02.993 19:19:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:02.993 19:19:18 -- accel/accel.sh@40 -- # local IFS=, 00:26:02.993 19:19:18 -- accel/accel.sh@41 -- # jq -r . 00:26:02.993 [2024-04-18 19:19:18.625485] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:02.993 [2024-04-18 19:19:18.625699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114998 ] 00:26:02.993 [2024-04-18 19:19:18.815217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.251 [2024-04-18 19:19:19.097120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val=0x1 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val=fill 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@23 -- # accel_opc=fill 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val=0x80 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # 
case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val=software 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@22 -- # accel_module=software 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val=64 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.509 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.509 19:19:19 -- accel/accel.sh@20 -- # val=64 00:26:03.509 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.510 19:19:19 -- accel/accel.sh@20 -- # val=1 00:26:03.510 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.510 19:19:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:03.510 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.510 19:19:19 -- accel/accel.sh@20 -- # val=Yes 00:26:03.510 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.510 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.510 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:03.510 19:19:19 -- accel/accel.sh@20 -- # val= 00:26:03.510 19:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # IFS=: 00:26:03.510 19:19:19 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@20 -- # val= 00:26:06.063 19:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@20 -- # val= 00:26:06.063 19:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@20 -- # val= 00:26:06.063 19:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@20 -- # val= 00:26:06.063 19:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@20 -- # val= 00:26:06.063 19:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@20 -- # val= 00:26:06.063 19:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.063 19:19:21 -- accel/accel.sh@27 -- # 
[[ -n software ]] 00:26:06.063 19:19:21 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:26:06.063 19:19:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.063 00:26:06.063 real 0m2.948s 00:26:06.063 user 0m2.703s 00:26:06.063 sys 0m0.187s 00:26:06.063 19:19:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:06.063 19:19:21 -- common/autotest_common.sh@10 -- # set +x 00:26:06.063 ************************************ 00:26:06.063 END TEST accel_fill 00:26:06.063 ************************************ 00:26:06.063 19:19:21 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:26:06.063 19:19:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:06.063 19:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.063 19:19:21 -- common/autotest_common.sh@10 -- # set +x 00:26:06.063 ************************************ 00:26:06.063 START TEST accel_copy_crc32c 00:26:06.063 ************************************ 00:26:06.063 19:19:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:26:06.063 19:19:21 -- accel/accel.sh@16 -- # local accel_opc 00:26:06.063 19:19:21 -- accel/accel.sh@17 -- # local accel_module 00:26:06.063 19:19:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # IFS=: 00:26:06.063 19:19:21 -- accel/accel.sh@19 -- # read -r var val 00:26:06.064 19:19:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:26:06.064 19:19:21 -- accel/accel.sh@12 -- # build_accel_config 00:26:06.064 19:19:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:06.064 19:19:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:06.064 19:19:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:06.064 19:19:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:06.064 19:19:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:06.064 19:19:21 -- accel/accel.sh@40 -- # local IFS=, 00:26:06.064 19:19:21 -- accel/accel.sh@41 -- # jq -r . 00:26:06.064 [2024-04-18 19:19:21.639967] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:06.064 [2024-04-18 19:19:21.640130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115065 ] 00:26:06.064 [2024-04-18 19:19:21.807511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.336 [2024-04-18 19:19:22.084284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val=0x1 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val=copy_crc32c 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val=0 00:26:06.607 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.607 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.607 19:19:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val=software 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@22 -- # accel_module=software 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val=32 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val=32 
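Every case in this section is wrapped the same way: a START TEST banner, the timed body, the real/user/sys lines, then an END TEST banner. That wrapping comes from run_test in common/autotest_common.sh; the helper below only mimics the visible shape (banners plus time) and is not the real implementation, which also handles xtrace control and failure bookkeeping.

  #!/usr/bin/env bash
  # Stand-in for the run_test wrapper whose START TEST / END TEST banners
  # and real/user/sys timings appear throughout this log.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test_sketch demo_case sleep 1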
00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val=1 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val=Yes 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:06.608 19:19:22 -- accel/accel.sh@20 -- # val= 00:26:06.608 19:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # IFS=: 00:26:06.608 19:19:22 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@20 -- # val= 00:26:08.510 19:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@20 -- # val= 00:26:08.510 19:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@20 -- # val= 00:26:08.510 19:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@20 -- # val= 00:26:08.510 19:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@20 -- # val= 00:26:08.510 19:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@20 -- # val= 00:26:08.510 19:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.510 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.510 19:19:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:08.510 19:19:24 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:26:08.510 19:19:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:08.510 00:26:08.510 real 0m2.839s 00:26:08.510 user 0m2.563s 00:26:08.510 sys 0m0.192s 00:26:08.510 19:19:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:08.510 19:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.510 ************************************ 00:26:08.510 END TEST accel_copy_crc32c 00:26:08.510 ************************************ 00:26:08.767 19:19:24 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:26:08.767 19:19:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:26:08.767 19:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:08.767 19:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:08.767 ************************************ 00:26:08.767 START TEST accel_copy_crc32c_C2 00:26:08.767 ************************************ 00:26:08.767 19:19:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:26:08.767 19:19:24 -- accel/accel.sh@16 -- # local accel_opc 00:26:08.767 19:19:24 -- accel/accel.sh@17 -- # local accel_module 00:26:08.767 19:19:24 -- accel/accel.sh@19 -- # IFS=: 00:26:08.767 19:19:24 -- accel/accel.sh@19 -- # read -r var val 00:26:08.767 19:19:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:26:08.767 19:19:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:26:08.767 19:19:24 -- accel/accel.sh@12 -- # build_accel_config 00:26:08.767 19:19:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:08.767 19:19:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:08.767 19:19:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:08.767 19:19:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:08.767 19:19:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:08.768 19:19:24 -- accel/accel.sh@40 -- # local IFS=, 00:26:08.768 19:19:24 -- accel/accel.sh@41 -- # jq -r . 00:26:08.768 [2024-04-18 19:19:24.578933] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:08.768 [2024-04-18 19:19:24.579159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115145 ] 00:26:09.025 [2024-04-18 19:19:24.763525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.282 [2024-04-18 19:19:24.996220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=0x1 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=copy_crc32c 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=0 00:26:09.541 19:19:25 -- 
accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val='8192 bytes' 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=software 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@22 -- # accel_module=software 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=32 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=32 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=1 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val=Yes 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:09.541 19:19:25 -- accel/accel.sh@20 -- # val= 00:26:09.541 19:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # IFS=: 00:26:09.541 19:19:25 -- accel/accel.sh@19 -- # read -r var val 00:26:11.442 19:19:27 -- accel/accel.sh@20 -- # val= 00:26:11.442 19:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # read -r var val 00:26:11.442 19:19:27 -- accel/accel.sh@20 -- # val= 00:26:11.442 19:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # read -r var val 00:26:11.442 19:19:27 -- accel/accel.sh@20 -- # val= 00:26:11.442 19:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # read -r var val 
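The "[[ -n software ]]", "[[ -n copy_crc32c ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]" lines that close each case above are the post-run assertions with $accel_module and $accel_opc already expanded by xtrace; the backslashes are how xtrace prints the right-hand string so it reads as a literal match rather than a glob. A minimal stand-in for that check, with the two variables hard-coded purely for illustration:

  #!/usr/bin/env bash
  # Stand-in for the end-of-test assertions seen in the trace; in accel.sh
  # these variables are filled in from accel_perf output, not hard-coded.
  accel_opc=copy_crc32c
  accel_module=software

  [[ -n $accel_opc ]]             || { echo "no opcode reported"; exit 1; }
  [[ -n $accel_module ]]          || { echo "no module reported"; exit 1; }
  [[ $accel_module == software ]] || { echo "unexpected module: $accel_module"; exit 1; }
  echo "post-run check passed: $accel_opc ran on the $accel_module module"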
00:26:11.442 19:19:27 -- accel/accel.sh@20 -- # val= 00:26:11.442 19:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # read -r var val 00:26:11.442 19:19:27 -- accel/accel.sh@20 -- # val= 00:26:11.442 19:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # read -r var val 00:26:11.442 19:19:27 -- accel/accel.sh@20 -- # val= 00:26:11.442 19:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.442 19:19:27 -- accel/accel.sh@19 -- # read -r var val 00:26:11.442 19:19:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:11.442 19:19:27 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:26:11.442 19:19:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:11.442 00:26:11.442 real 0m2.817s 00:26:11.442 user 0m2.549s 00:26:11.442 sys 0m0.200s 00:26:11.442 19:19:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:11.442 19:19:27 -- common/autotest_common.sh@10 -- # set +x 00:26:11.442 ************************************ 00:26:11.442 END TEST accel_copy_crc32c_C2 00:26:11.442 ************************************ 00:26:11.700 19:19:27 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:26:11.700 19:19:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:11.700 19:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:11.700 19:19:27 -- common/autotest_common.sh@10 -- # set +x 00:26:11.700 ************************************ 00:26:11.700 START TEST accel_dualcast 00:26:11.700 ************************************ 00:26:11.700 19:19:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:26:11.700 19:19:27 -- accel/accel.sh@16 -- # local accel_opc 00:26:11.700 19:19:27 -- accel/accel.sh@17 -- # local accel_module 00:26:11.700 19:19:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:26:11.700 19:19:27 -- accel/accel.sh@19 -- # IFS=: 00:26:11.700 19:19:27 -- accel/accel.sh@19 -- # read -r var val 00:26:11.700 19:19:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:26:11.700 19:19:27 -- accel/accel.sh@12 -- # build_accel_config 00:26:11.700 19:19:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:11.700 19:19:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:11.700 19:19:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:11.700 19:19:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:11.700 19:19:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:11.700 19:19:27 -- accel/accel.sh@40 -- # local IFS=, 00:26:11.700 19:19:27 -- accel/accel.sh@41 -- # jq -r . 00:26:11.700 [2024-04-18 19:19:27.495841] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:11.700 [2024-04-18 19:19:27.496075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115212 ] 00:26:11.979 [2024-04-18 19:19:27.677868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.252 [2024-04-18 19:19:27.916054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=0x1 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=dualcast 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=software 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@22 -- # accel_module=software 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=32 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=32 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=1 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val='1 seconds' 
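Ahead of each accel_perf launch the trace shows build_accel_config: an accel_json_cfg array (empty here, which appears to be why the "[[ 0 -gt 0 ]]" checks come up false), a comma IFS, and a jq -r . pass feeding the config descriptor. The sketch below reproduces that shape under the assumption that the array holds per-module JSON fragments; the fragment and the surrounding JSON layout are illustrative, not copied from this tree.

  #!/usr/bin/env bash
  # Sketch of the build_accel_config pattern from the trace: collect JSON
  # fragments in an array, join them with commas, normalize via jq -r .
  # The dsa fragment is a made-up example; in this log the array is empty.
  accel_json_cfg=()
  accel_json_cfg+=('{"method": "dsa_scan_accel_module", "params": {}}')

  build_config_sketch() {
      local IFS=,
      printf '{"subsystems": [{"subsystem": "accel", "config": [%s]}]}' \
          "${accel_json_cfg[*]}" | jq -r .
  }

  build_config_sketch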
00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val=Yes 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:12.252 19:19:28 -- accel/accel.sh@20 -- # val= 00:26:12.252 19:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # IFS=: 00:26:12.252 19:19:28 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@20 -- # val= 00:26:14.796 19:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@20 -- # val= 00:26:14.796 19:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@20 -- # val= 00:26:14.796 19:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@20 -- # val= 00:26:14.796 19:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@20 -- # val= 00:26:14.796 19:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@20 -- # val= 00:26:14.796 19:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:14.796 19:19:30 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:26:14.796 19:19:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.796 00:26:14.796 real 0m2.823s 00:26:14.796 user 0m2.537s 00:26:14.796 sys 0m0.205s 00:26:14.796 19:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:14.796 19:19:30 -- common/autotest_common.sh@10 -- # set +x 00:26:14.796 ************************************ 00:26:14.796 END TEST accel_dualcast 00:26:14.796 ************************************ 00:26:14.796 19:19:30 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:26:14.796 19:19:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:14.796 19:19:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:14.796 19:19:30 -- common/autotest_common.sh@10 -- # set +x 00:26:14.796 ************************************ 00:26:14.796 START TEST accel_compare 00:26:14.796 ************************************ 00:26:14.796 19:19:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:26:14.796 19:19:30 -- accel/accel.sh@16 -- # local accel_opc 00:26:14.796 19:19:30 -- accel/accel.sh@17 -- # local 
accel_module 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # IFS=: 00:26:14.796 19:19:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:26:14.796 19:19:30 -- accel/accel.sh@19 -- # read -r var val 00:26:14.796 19:19:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:26:14.796 19:19:30 -- accel/accel.sh@12 -- # build_accel_config 00:26:14.796 19:19:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:14.796 19:19:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:14.796 19:19:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:14.796 19:19:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:14.796 19:19:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:14.796 19:19:30 -- accel/accel.sh@40 -- # local IFS=, 00:26:14.796 19:19:30 -- accel/accel.sh@41 -- # jq -r . 00:26:14.796 [2024-04-18 19:19:30.389119] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:14.796 [2024-04-18 19:19:30.389269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115274 ] 00:26:14.796 [2024-04-18 19:19:30.552997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.054 [2024-04-18 19:19:30.790216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=0x1 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=compare 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@23 -- # accel_opc=compare 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=software 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 
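The xtrace_disable and "set +x" entries that bracket each result block come from helpers in common/autotest_common.sh that mute the shell trace around bookkeeping. A simplified on/off stand-in is below; the real helpers remember and restore the previous xtrace state rather than forcing it back on.

  #!/usr/bin/env bash
  # Simplified stand-in for the trace-muting helpers referenced in the log.
  xtrace_disable() { set +x; }
  xtrace_restore() { set -x; }

  set -x
  echo "traced section"        # emitted with a leading '+' trace line
  xtrace_disable
  echo "untraced bookkeeping"  # no '+' trace for this part
  xtrace_restore
  echo "traced again"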
00:26:15.313 19:19:31 -- accel/accel.sh@22 -- # accel_module=software 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=32 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=32 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=1 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val=Yes 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:15.313 19:19:31 -- accel/accel.sh@20 -- # val= 00:26:15.313 19:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # IFS=: 00:26:15.313 19:19:31 -- accel/accel.sh@19 -- # read -r var val 00:26:17.842 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:17.842 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:17.842 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.842 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.842 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:17.842 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:17.842 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.842 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.842 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:17.842 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:17.842 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.842 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.843 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:17.843 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.843 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:17.843 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.843 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:17.843 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.843 19:19:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:17.843 19:19:33 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:26:17.843 19:19:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.843 00:26:17.843 real 0m2.810s 00:26:17.843 user 0m2.557s 00:26:17.843 sys 
0m0.187s 00:26:17.843 19:19:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:17.843 19:19:33 -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 ************************************ 00:26:17.843 END TEST accel_compare 00:26:17.843 ************************************ 00:26:17.843 19:19:33 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:26:17.843 19:19:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:17.843 19:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:17.843 19:19:33 -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 ************************************ 00:26:17.843 START TEST accel_xor 00:26:17.843 ************************************ 00:26:17.843 19:19:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:26:17.843 19:19:33 -- accel/accel.sh@16 -- # local accel_opc 00:26:17.843 19:19:33 -- accel/accel.sh@17 -- # local accel_module 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:17.843 19:19:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:26:17.843 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:17.843 19:19:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:26:17.843 19:19:33 -- accel/accel.sh@12 -- # build_accel_config 00:26:17.843 19:19:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:17.843 19:19:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:17.843 19:19:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:17.843 19:19:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:17.843 19:19:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:17.843 19:19:33 -- accel/accel.sh@40 -- # local IFS=, 00:26:17.843 19:19:33 -- accel/accel.sh@41 -- # jq -r . 00:26:17.843 [2024-04-18 19:19:33.318103] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:17.843 [2024-04-18 19:19:33.318341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115362 ] 00:26:17.843 [2024-04-18 19:19:33.492681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.843 [2024-04-18 19:19:33.731281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=0x1 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=xor 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@23 -- # accel_opc=xor 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=2 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=software 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@22 -- # accel_module=software 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=32 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=32 00:26:18.102 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.102 19:19:33 -- accel/accel.sh@20 -- # val=1 00:26:18.102 19:19:33 -- 
accel/accel.sh@21 -- # case "$var" in 00:26:18.102 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.103 19:19:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:18.103 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.103 19:19:33 -- accel/accel.sh@20 -- # val=Yes 00:26:18.103 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.103 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.103 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:18.103 19:19:33 -- accel/accel.sh@20 -- # val= 00:26:18.103 19:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # IFS=: 00:26:18.103 19:19:33 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:35 -- accel/accel.sh@20 -- # val= 00:26:20.677 19:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:35 -- accel/accel.sh@20 -- # val= 00:26:20.677 19:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:35 -- accel/accel.sh@20 -- # val= 00:26:20.677 19:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:35 -- accel/accel.sh@20 -- # val= 00:26:20.677 19:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:35 -- accel/accel.sh@20 -- # val= 00:26:20.677 19:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:35 -- accel/accel.sh@20 -- # val= 00:26:20.677 19:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:35 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:20.677 19:19:36 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:26:20.677 19:19:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.677 00:26:20.677 real 0m2.744s 00:26:20.677 user 0m2.484s 00:26:20.677 sys 0m0.184s 00:26:20.677 19:19:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:20.677 19:19:36 -- common/autotest_common.sh@10 -- # set +x 00:26:20.677 ************************************ 00:26:20.677 END TEST accel_xor 00:26:20.677 ************************************ 00:26:20.677 19:19:36 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:26:20.677 19:19:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:26:20.677 19:19:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.677 19:19:36 -- common/autotest_common.sh@10 -- # set +x 00:26:20.677 ************************************ 00:26:20.677 START TEST accel_xor 00:26:20.677 ************************************ 00:26:20.677 
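Taken together, this section drives the same accel_perf binary through one workload after another: copy, fill, copy_crc32c (plain and chained with -C 2), dualcast, compare, and xor (plain and with -x 3). A small driver sketch with those option strings is below; the strings are copied from the run_test lines in this log, while the SPDK_DIR default and the standalone invocation without the harness's "-c /dev/fd/62" config are assumptions.

  #!/usr/bin/env bash
  # Sketch: loop accel_perf over the software-module workloads exercised in
  # this section of the log. Option strings come from the run_test lines.
  set -euo pipefail
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  PERF="$SPDK_DIR/build/examples/accel_perf"

  cases=(
      "-w copy -y"
      "-w fill -f 128 -q 64 -a 64 -y"
      "-w copy_crc32c -y"
      "-w copy_crc32c -y -C 2"
      "-w dualcast -y"
      "-w compare -y"
      "-w xor -y"
      "-w xor -y -x 3"
  )

  for opts in "${cases[@]}"; do
      echo ">>> accel_perf -t 1 $opts"
      # word-splitting of $opts is deliberate here
      "$PERF" -t 1 $opts
  done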
19:19:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:26:20.677 19:19:36 -- accel/accel.sh@16 -- # local accel_opc 00:26:20.677 19:19:36 -- accel/accel.sh@17 -- # local accel_module 00:26:20.677 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.677 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.677 19:19:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:26:20.677 19:19:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:26:20.677 19:19:36 -- accel/accel.sh@12 -- # build_accel_config 00:26:20.677 19:19:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:20.677 19:19:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:20.677 19:19:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:20.677 19:19:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:20.677 19:19:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:20.677 19:19:36 -- accel/accel.sh@40 -- # local IFS=, 00:26:20.677 19:19:36 -- accel/accel.sh@41 -- # jq -r . 00:26:20.677 [2024-04-18 19:19:36.159146] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:20.677 [2024-04-18 19:19:36.159386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115424 ] 00:26:20.677 [2024-04-18 19:19:36.341388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.935 [2024-04-18 19:19:36.630300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val=0x1 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val=xor 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@23 -- # accel_opc=xor 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val=3 00:26:20.935 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.935 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.935 19:19:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 
00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val=software 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@22 -- # accel_module=software 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val=32 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val=32 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val=1 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:20.936 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:20.936 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:20.936 19:19:36 -- accel/accel.sh@20 -- # val=Yes 00:26:21.194 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:21.194 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:21.194 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:21.194 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:21.194 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:21.194 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:21.194 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:21.194 19:19:36 -- accel/accel.sh@20 -- # val= 00:26:21.194 19:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:26:21.194 19:19:36 -- accel/accel.sh@19 -- # IFS=: 00:26:21.194 19:19:36 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@20 -- # val= 00:26:23.103 19:19:38 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@20 -- # val= 00:26:23.103 19:19:38 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@20 -- # val= 00:26:23.103 19:19:38 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@20 -- # val= 00:26:23.103 19:19:38 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@20 -- # val= 00:26:23.103 19:19:38 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@20 -- # val= 00:26:23.103 19:19:38 -- accel/accel.sh@21 -- # case "$var" in 
00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.103 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.103 19:19:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:23.103 19:19:38 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:26:23.103 19:19:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:23.103 00:26:23.103 real 0m2.771s 00:26:23.103 user 0m2.494s 00:26:23.103 sys 0m0.195s 00:26:23.103 19:19:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:23.103 19:19:38 -- common/autotest_common.sh@10 -- # set +x 00:26:23.103 ************************************ 00:26:23.103 END TEST accel_xor 00:26:23.103 ************************************ 00:26:23.103 19:19:38 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:26:23.103 19:19:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:23.103 19:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.103 19:19:38 -- common/autotest_common.sh@10 -- # set +x 00:26:23.104 ************************************ 00:26:23.104 START TEST accel_dif_verify 00:26:23.104 ************************************ 00:26:23.104 19:19:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:26:23.104 19:19:38 -- accel/accel.sh@16 -- # local accel_opc 00:26:23.104 19:19:38 -- accel/accel.sh@17 -- # local accel_module 00:26:23.104 19:19:38 -- accel/accel.sh@19 -- # IFS=: 00:26:23.104 19:19:38 -- accel/accel.sh@19 -- # read -r var val 00:26:23.104 19:19:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:26:23.104 19:19:38 -- accel/accel.sh@12 -- # build_accel_config 00:26:23.104 19:19:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:26:23.104 19:19:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:23.104 19:19:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:23.104 19:19:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:23.104 19:19:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:23.104 19:19:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:23.104 19:19:38 -- accel/accel.sh@40 -- # local IFS=, 00:26:23.104 19:19:38 -- accel/accel.sh@41 -- # jq -r . 00:26:23.104 [2024-04-18 19:19:39.005590] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
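(Editor's illustrative sketch, not from the log.) The dif_verify case starting here follows the same pattern: accel_perf is launched with -t 1 -w dif_verify, and the trace below echoes the 4096-, 512- and 8-byte buffer sizes it works against. Assuming the -c /dev/fd/62 config can again be omitted for a plain software-module run:
  # 1-second DIF verify workload on the software module
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify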
00:26:23.104 [2024-04-18 19:19:39.005758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115482 ] 00:26:23.360 [2024-04-18 19:19:39.171114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.617 [2024-04-18 19:19:39.407763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val=0x1 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val=dif_verify 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val='512 bytes' 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val='8 bytes' 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val=software 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@22 -- # accel_module=software 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- 
accel/accel.sh@20 -- # val=32 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val=32 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val=1 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val=No 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:23.877 19:19:39 -- accel/accel.sh@20 -- # val= 00:26:23.877 19:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # IFS=: 00:26:23.877 19:19:39 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@20 -- # val= 00:26:26.409 19:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@20 -- # val= 00:26:26.409 19:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@20 -- # val= 00:26:26.409 19:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@20 -- # val= 00:26:26.409 19:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@20 -- # val= 00:26:26.409 19:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@20 -- # val= 00:26:26.409 19:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:26.409 19:19:41 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:26:26.409 19:19:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:26.409 00:26:26.409 real 0m2.768s 00:26:26.409 user 0m2.511s 00:26:26.409 sys 0m0.178s 00:26:26.409 ************************************ 00:26:26.409 END TEST accel_dif_verify 00:26:26.409 ************************************ 00:26:26.409 19:19:41 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:26:26.409 19:19:41 -- common/autotest_common.sh@10 -- # set +x 00:26:26.409 19:19:41 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:26:26.409 19:19:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:26.409 19:19:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:26.409 19:19:41 -- common/autotest_common.sh@10 -- # set +x 00:26:26.409 ************************************ 00:26:26.409 START TEST accel_dif_generate 00:26:26.409 ************************************ 00:26:26.409 19:19:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:26:26.409 19:19:41 -- accel/accel.sh@16 -- # local accel_opc 00:26:26.409 19:19:41 -- accel/accel.sh@17 -- # local accel_module 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # IFS=: 00:26:26.409 19:19:41 -- accel/accel.sh@19 -- # read -r var val 00:26:26.409 19:19:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:26:26.409 19:19:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:26:26.409 19:19:41 -- accel/accel.sh@12 -- # build_accel_config 00:26:26.409 19:19:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:26.409 19:19:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:26.409 19:19:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:26.409 19:19:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:26.409 19:19:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:26.409 19:19:41 -- accel/accel.sh@40 -- # local IFS=, 00:26:26.409 19:19:41 -- accel/accel.sh@41 -- # jq -r . 00:26:26.409 [2024-04-18 19:19:41.860275] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:26.409 [2024-04-18 19:19:41.860616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115544 ] 00:26:26.409 [2024-04-18 19:19:42.024969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.409 [2024-04-18 19:19:42.319574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.665 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.665 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.665 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.665 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.665 19:19:42 -- accel/accel.sh@20 -- # val=0x1 00:26:26.665 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.665 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.665 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.665 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val=dif_generate 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val='512 bytes' 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val='8 bytes' 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val=software 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@22 -- # accel_module=software 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val=32 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val=32 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val=1 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val=No 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:26.666 19:19:42 -- accel/accel.sh@20 -- # val= 00:26:26.666 19:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # IFS=: 00:26:26.666 19:19:42 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@20 -- # val= 00:26:29.198 19:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- 
accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@20 -- # val= 00:26:29.198 19:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@20 -- # val= 00:26:29.198 19:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@20 -- # val= 00:26:29.198 19:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@20 -- # val= 00:26:29.198 19:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@20 -- # val= 00:26:29.198 19:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 ************************************ 00:26:29.198 END TEST accel_dif_generate 00:26:29.198 ************************************ 00:26:29.198 19:19:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:29.198 19:19:44 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:26:29.198 19:19:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:29.198 00:26:29.198 real 0m2.857s 00:26:29.198 user 0m2.599s 00:26:29.198 sys 0m0.193s 00:26:29.198 19:19:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:29.198 19:19:44 -- common/autotest_common.sh@10 -- # set +x 00:26:29.198 19:19:44 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:26:29.198 19:19:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:29.198 19:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.198 19:19:44 -- common/autotest_common.sh@10 -- # set +x 00:26:29.198 ************************************ 00:26:29.198 START TEST accel_dif_generate_copy 00:26:29.198 ************************************ 00:26:29.198 19:19:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:26:29.198 19:19:44 -- accel/accel.sh@16 -- # local accel_opc 00:26:29.198 19:19:44 -- accel/accel.sh@17 -- # local accel_module 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # IFS=: 00:26:29.198 19:19:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:26:29.198 19:19:44 -- accel/accel.sh@19 -- # read -r var val 00:26:29.198 19:19:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:26:29.198 19:19:44 -- accel/accel.sh@12 -- # build_accel_config 00:26:29.198 19:19:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:29.198 19:19:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:29.198 19:19:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:29.198 19:19:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:29.198 19:19:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:29.198 19:19:44 -- accel/accel.sh@40 -- # local IFS=, 00:26:29.198 19:19:44 -- accel/accel.sh@41 -- # jq -r . 00:26:29.198 [2024-04-18 19:19:44.805106] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
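(Editor's illustrative sketch, not from the log.) dif_generate_copy is exercised exactly like the dif_generate case above, only with a different -w workload name; everything else in the echoed command line is unchanged. A hedged manual equivalent, under the same assumption about dropping -c:
  # generate DIF metadata and copy in one operation, 1-second run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy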
00:26:29.198 [2024-04-18 19:19:44.805471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115634 ] 00:26:29.198 [2024-04-18 19:19:44.972605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.455 [2024-04-18 19:19:45.204008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val=0x1 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val=software 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@22 -- # accel_module=software 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val=32 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val=32 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 
-- # val=1 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val=No 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:29.714 19:19:45 -- accel/accel.sh@20 -- # val= 00:26:29.714 19:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # IFS=: 00:26:29.714 19:19:45 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 19:19:47 -- accel/accel.sh@20 -- # val= 00:26:31.612 19:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 19:19:47 -- accel/accel.sh@20 -- # val= 00:26:31.612 19:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 19:19:47 -- accel/accel.sh@20 -- # val= 00:26:31.612 19:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 19:19:47 -- accel/accel.sh@20 -- # val= 00:26:31.612 19:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 19:19:47 -- accel/accel.sh@20 -- # val= 00:26:31.612 19:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 19:19:47 -- accel/accel.sh@20 -- # val= 00:26:31.612 19:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.612 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.612 ************************************ 00:26:31.612 END TEST accel_dif_generate_copy 00:26:31.612 ************************************ 00:26:31.612 19:19:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:31.612 19:19:47 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:26:31.612 19:19:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:31.612 00:26:31.612 real 0m2.761s 00:26:31.612 user 0m2.518s 00:26:31.612 sys 0m0.162s 00:26:31.612 19:19:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:31.612 19:19:47 -- common/autotest_common.sh@10 -- # set +x 00:26:31.870 19:19:47 -- accel/accel.sh@115 -- # [[ y == y ]] 00:26:31.870 19:19:47 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:31.870 19:19:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:26:31.870 19:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:31.870 19:19:47 -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.870 ************************************ 00:26:31.870 START TEST accel_comp 00:26:31.870 ************************************ 00:26:31.870 19:19:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:31.870 19:19:47 -- accel/accel.sh@16 -- # local accel_opc 00:26:31.870 19:19:47 -- accel/accel.sh@17 -- # local accel_module 00:26:31.870 19:19:47 -- accel/accel.sh@19 -- # IFS=: 00:26:31.870 19:19:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:31.870 19:19:47 -- accel/accel.sh@19 -- # read -r var val 00:26:31.870 19:19:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:31.870 19:19:47 -- accel/accel.sh@12 -- # build_accel_config 00:26:31.870 19:19:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:31.870 19:19:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:31.870 19:19:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:31.870 19:19:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:31.870 19:19:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:31.870 19:19:47 -- accel/accel.sh@40 -- # local IFS=, 00:26:31.870 19:19:47 -- accel/accel.sh@41 -- # jq -r . 00:26:31.870 [2024-04-18 19:19:47.674705] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:31.870 [2024-04-18 19:19:47.675156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115697 ] 00:26:32.129 [2024-04-18 19:19:47.854732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.411 [2024-04-18 19:19:48.084674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=0x1 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=compress 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@23 
-- # accel_opc=compress 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=software 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@22 -- # accel_module=software 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=32 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=32 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=1 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val=No 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:32.669 19:19:48 -- accel/accel.sh@20 -- # val= 00:26:32.669 19:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # IFS=: 00:26:32.669 19:19:48 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@20 -- # val= 00:26:34.572 19:19:50 -- accel/accel.sh@21 -- # case "$var" in 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@20 -- # val= 00:26:34.572 19:19:50 -- accel/accel.sh@21 -- # case "$var" in 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@20 -- # val= 00:26:34.572 19:19:50 -- accel/accel.sh@21 -- # case "$var" in 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # 
read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@20 -- # val= 00:26:34.572 19:19:50 -- accel/accel.sh@21 -- # case "$var" in 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@20 -- # val= 00:26:34.572 19:19:50 -- accel/accel.sh@21 -- # case "$var" in 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@20 -- # val= 00:26:34.572 19:19:50 -- accel/accel.sh@21 -- # case "$var" in 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 ************************************ 00:26:34.572 END TEST accel_comp 00:26:34.572 ************************************ 00:26:34.572 19:19:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:34.572 19:19:50 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:26:34.572 19:19:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:34.572 00:26:34.572 real 0m2.781s 00:26:34.572 user 0m2.488s 00:26:34.572 sys 0m0.216s 00:26:34.572 19:19:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:34.572 19:19:50 -- common/autotest_common.sh@10 -- # set +x 00:26:34.572 19:19:50 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.572 19:19:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:26:34.572 19:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.572 19:19:50 -- common/autotest_common.sh@10 -- # set +x 00:26:34.572 ************************************ 00:26:34.572 START TEST accel_decomp 00:26:34.572 ************************************ 00:26:34.572 19:19:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.572 19:19:50 -- accel/accel.sh@16 -- # local accel_opc 00:26:34.572 19:19:50 -- accel/accel.sh@17 -- # local accel_module 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # IFS=: 00:26:34.572 19:19:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.572 19:19:50 -- accel/accel.sh@19 -- # read -r var val 00:26:34.572 19:19:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.572 19:19:50 -- accel/accel.sh@12 -- # build_accel_config 00:26:34.572 19:19:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:34.572 19:19:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:34.572 19:19:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:34.572 19:19:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:34.572 19:19:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:34.572 19:19:50 -- accel/accel.sh@40 -- # local IFS=, 00:26:34.572 19:19:50 -- accel/accel.sh@41 -- # jq -r . 00:26:34.831 [2024-04-18 19:19:50.544107] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
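(Editor's illustrative sketch, not from the log.) The decompress case starting here feeds accel_perf the pre-built test/accel/bib input via -l and verifies the output with -y, as echoed in the trace. Assuming the -c config can be dropped for a software-module run:
  # 1-second software decompress of the bundled bib test file, with verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y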
00:26:34.831 [2024-04-18 19:19:50.544528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115763 ] 00:26:34.831 [2024-04-18 19:19:50.721784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.089 [2024-04-18 19:19:50.959961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.348 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.348 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.348 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.348 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.348 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.348 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.348 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.348 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.348 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.348 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.348 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.348 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=0x1 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=decompress 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=software 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@22 -- # accel_module=software 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=32 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- 
accel/accel.sh@20 -- # val=32 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=1 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val=Yes 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:35.349 19:19:51 -- accel/accel.sh@20 -- # val= 00:26:35.349 19:19:51 -- accel/accel.sh@21 -- # case "$var" in 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # IFS=: 00:26:35.349 19:19:51 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 19:19:53 -- accel/accel.sh@20 -- # val= 00:26:37.879 19:19:53 -- accel/accel.sh@21 -- # case "$var" in 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 19:19:53 -- accel/accel.sh@20 -- # val= 00:26:37.879 19:19:53 -- accel/accel.sh@21 -- # case "$var" in 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 19:19:53 -- accel/accel.sh@20 -- # val= 00:26:37.879 19:19:53 -- accel/accel.sh@21 -- # case "$var" in 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 19:19:53 -- accel/accel.sh@20 -- # val= 00:26:37.879 19:19:53 -- accel/accel.sh@21 -- # case "$var" in 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 19:19:53 -- accel/accel.sh@20 -- # val= 00:26:37.879 19:19:53 -- accel/accel.sh@21 -- # case "$var" in 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 19:19:53 -- accel/accel.sh@20 -- # val= 00:26:37.879 19:19:53 -- accel/accel.sh@21 -- # case "$var" in 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.879 ************************************ 00:26:37.879 END TEST accel_decomp 00:26:37.879 ************************************ 00:26:37.879 19:19:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:37.879 19:19:53 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:26:37.879 19:19:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:37.879 00:26:37.879 real 0m2.841s 00:26:37.879 user 0m2.534s 00:26:37.879 sys 0m0.213s 00:26:37.879 19:19:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:37.879 19:19:53 -- common/autotest_common.sh@10 -- # set +x 00:26:37.879 19:19:53 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
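(Editor's illustrative sketch, not from the log.) The accel_decmop_full variant launched by the run_test line above adds -o 0, and the trace that follows shows the buffer growing to the full 111250-byte input rather than the 4096-byte chunks used so far; -o 0 therefore appears to let the transfer size follow the input file, though that reading is an assumption. Manual equivalent under the same -c assumption:
  # decompress the whole bib file using one full-sized buffer per operation
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0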
00:26:37.879 19:19:53 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:26:37.879 19:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:37.879 19:19:53 -- common/autotest_common.sh@10 -- # set +x 00:26:37.879 ************************************ 00:26:37.879 START TEST accel_decmop_full 00:26:37.879 ************************************ 00:26:37.879 19:19:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:26:37.879 19:19:53 -- accel/accel.sh@16 -- # local accel_opc 00:26:37.879 19:19:53 -- accel/accel.sh@17 -- # local accel_module 00:26:37.879 19:19:53 -- accel/accel.sh@19 -- # IFS=: 00:26:37.879 19:19:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:26:37.880 19:19:53 -- accel/accel.sh@19 -- # read -r var val 00:26:37.880 19:19:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:26:37.880 19:19:53 -- accel/accel.sh@12 -- # build_accel_config 00:26:37.880 19:19:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:37.880 19:19:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:37.880 19:19:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:37.880 19:19:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:37.880 19:19:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:37.880 19:19:53 -- accel/accel.sh@40 -- # local IFS=, 00:26:37.880 19:19:53 -- accel/accel.sh@41 -- # jq -r . 00:26:37.880 [2024-04-18 19:19:53.455333] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:37.880 [2024-04-18 19:19:53.455703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115825 ] 00:26:37.880 [2024-04-18 19:19:53.625016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.137 [2024-04-18 19:19:53.904211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=0x1 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 
19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=decompress 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val='111250 bytes' 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=software 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@22 -- # accel_module=software 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=32 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=32 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=1 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val=Yes 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:38.396 19:19:54 -- accel/accel.sh@20 -- # val= 00:26:38.396 19:19:54 -- accel/accel.sh@21 -- # case "$var" in 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # IFS=: 00:26:38.396 19:19:54 -- accel/accel.sh@19 -- # read -r var val 00:26:40.354 19:19:56 -- accel/accel.sh@20 -- # val= 00:26:40.354 19:19:56 -- accel/accel.sh@21 -- # case "$var" in 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # read -r var val 00:26:40.354 19:19:56 -- accel/accel.sh@20 -- # val= 00:26:40.354 19:19:56 -- accel/accel.sh@21 -- # case "$var" in 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # read -r 
var val 00:26:40.354 19:19:56 -- accel/accel.sh@20 -- # val= 00:26:40.354 19:19:56 -- accel/accel.sh@21 -- # case "$var" in 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # read -r var val 00:26:40.354 19:19:56 -- accel/accel.sh@20 -- # val= 00:26:40.354 19:19:56 -- accel/accel.sh@21 -- # case "$var" in 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # read -r var val 00:26:40.354 19:19:56 -- accel/accel.sh@20 -- # val= 00:26:40.354 19:19:56 -- accel/accel.sh@21 -- # case "$var" in 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # read -r var val 00:26:40.354 19:19:56 -- accel/accel.sh@20 -- # val= 00:26:40.354 19:19:56 -- accel/accel.sh@21 -- # case "$var" in 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.354 19:19:56 -- accel/accel.sh@19 -- # read -r var val 00:26:40.612 19:19:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:40.612 19:19:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:26:40.612 ************************************ 00:26:40.612 END TEST accel_decmop_full 00:26:40.612 ************************************ 00:26:40.612 19:19:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:40.612 00:26:40.612 real 0m2.876s 00:26:40.612 user 0m2.592s 00:26:40.612 sys 0m0.175s 00:26:40.612 19:19:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:40.612 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.612 19:19:56 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:26:40.612 19:19:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:26:40.612 19:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.612 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.612 ************************************ 00:26:40.612 START TEST accel_decomp_mcore 00:26:40.612 ************************************ 00:26:40.612 19:19:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:26:40.612 19:19:56 -- accel/accel.sh@16 -- # local accel_opc 00:26:40.612 19:19:56 -- accel/accel.sh@17 -- # local accel_module 00:26:40.612 19:19:56 -- accel/accel.sh@19 -- # IFS=: 00:26:40.612 19:19:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:26:40.612 19:19:56 -- accel/accel.sh@19 -- # read -r var val 00:26:40.612 19:19:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:26:40.612 19:19:56 -- accel/accel.sh@12 -- # build_accel_config 00:26:40.612 19:19:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:40.612 19:19:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:40.612 19:19:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:40.612 19:19:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:40.612 19:19:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:40.612 19:19:56 -- accel/accel.sh@40 -- # local IFS=, 00:26:40.612 19:19:56 -- accel/accel.sh@41 -- # jq -r . 00:26:40.612 [2024-04-18 19:19:56.419746] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:40.612 [2024-04-18 19:19:56.420206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115903 ] 00:26:40.870 [2024-04-18 19:19:56.611328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.127 [2024-04-18 19:19:56.877919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.127 [2024-04-18 19:19:56.878064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.127 [2024-04-18 19:19:56.877993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.127 [2024-04-18 19:19:56.878069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.384 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.384 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.384 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.384 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.384 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.384 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.384 19:19:57 -- accel/accel.sh@20 -- # val=0xf 00:26:41.384 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.384 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=decompress 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@23 -- # accel_opc=decompress 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=software 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@22 -- # accel_module=software 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 
00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=32 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=32 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=1 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val=Yes 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:41.385 19:19:57 -- accel/accel.sh@20 -- # val= 00:26:41.385 19:19:57 -- accel/accel.sh@21 -- # case "$var" in 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # IFS=: 00:26:41.385 19:19:57 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- 
accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@20 -- # val= 00:26:43.916 19:19:59 -- accel/accel.sh@21 -- # case "$var" in 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 ************************************ 00:26:43.916 END TEST accel_decomp_mcore 00:26:43.916 ************************************ 00:26:43.916 19:19:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:43.916 19:19:59 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:26:43.916 19:19:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:43.916 00:26:43.916 real 0m2.929s 00:26:43.916 user 0m8.403s 00:26:43.916 sys 0m0.211s 00:26:43.916 19:19:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:43.916 19:19:59 -- common/autotest_common.sh@10 -- # set +x 00:26:43.916 19:19:59 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:26:43.916 19:19:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:26:43.916 19:19:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:43.916 19:19:59 -- common/autotest_common.sh@10 -- # set +x 00:26:43.916 ************************************ 00:26:43.916 START TEST accel_decomp_full_mcore 00:26:43.916 ************************************ 00:26:43.916 19:19:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:26:43.916 19:19:59 -- accel/accel.sh@16 -- # local accel_opc 00:26:43.916 19:19:59 -- accel/accel.sh@17 -- # local accel_module 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # IFS=: 00:26:43.916 19:19:59 -- accel/accel.sh@19 -- # read -r var val 00:26:43.916 19:19:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:26:43.916 19:19:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:26:43.916 19:19:59 -- accel/accel.sh@12 -- # build_accel_config 00:26:43.916 19:19:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:43.916 19:19:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:43.916 19:19:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:43.916 19:19:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:43.916 19:19:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:43.916 19:19:59 -- accel/accel.sh@40 -- # local IFS=, 00:26:43.916 19:19:59 -- accel/accel.sh@41 -- # jq -r . 00:26:43.916 [2024-04-18 19:19:59.431176] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:43.916 [2024-04-18 19:19:59.431719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115973 ] 00:26:43.916 [2024-04-18 19:19:59.614492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.173 [2024-04-18 19:19:59.859861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.173 [2024-04-18 19:19:59.860084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.173 [2024-04-18 19:19:59.859974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.173 [2024-04-18 19:19:59.860087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=0xf 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=decompress 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@23 -- # accel_opc=decompress 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val='111250 bytes' 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=software 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@22 -- # accel_module=software 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 
00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=32 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=32 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=1 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val=Yes 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:44.431 19:20:00 -- accel/accel.sh@20 -- # val= 00:26:44.431 19:20:00 -- accel/accel.sh@21 -- # case "$var" in 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # IFS=: 00:26:44.431 19:20:00 -- accel/accel.sh@19 -- # read -r var val 00:26:46.959 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.959 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.959 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.959 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.959 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.959 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.959 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.959 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.959 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.959 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.959 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.960 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.960 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.960 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.960 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.960 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- 
accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@20 -- # val= 00:26:46.960 19:20:02 -- accel/accel.sh@21 -- # case "$var" in 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 ************************************ 00:26:46.960 END TEST accel_decomp_full_mcore 00:26:46.960 ************************************ 00:26:46.960 19:20:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:46.960 19:20:02 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:26:46.960 19:20:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.960 00:26:46.960 real 0m2.920s 00:26:46.960 user 0m8.505s 00:26:46.960 sys 0m0.219s 00:26:46.960 19:20:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:46.960 19:20:02 -- common/autotest_common.sh@10 -- # set +x 00:26:46.960 19:20:02 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:26:46.960 19:20:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:26:46.960 19:20:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:46.960 19:20:02 -- common/autotest_common.sh@10 -- # set +x 00:26:46.960 ************************************ 00:26:46.960 START TEST accel_decomp_mthread 00:26:46.960 ************************************ 00:26:46.960 19:20:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:26:46.960 19:20:02 -- accel/accel.sh@16 -- # local accel_opc 00:26:46.960 19:20:02 -- accel/accel.sh@17 -- # local accel_module 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # IFS=: 00:26:46.960 19:20:02 -- accel/accel.sh@19 -- # read -r var val 00:26:46.960 19:20:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:26:46.960 19:20:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:26:46.960 19:20:02 -- accel/accel.sh@12 -- # build_accel_config 00:26:46.960 19:20:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:46.960 19:20:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:46.960 19:20:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:46.960 19:20:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:46.960 19:20:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:46.960 19:20:02 -- accel/accel.sh@40 -- # local IFS=, 00:26:46.960 19:20:02 -- accel/accel.sh@41 -- # jq -r . 00:26:46.960 [2024-04-18 19:20:02.443812] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:46.960 [2024-04-18 19:20:02.444031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116038 ] 00:26:46.960 [2024-04-18 19:20:02.620310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.960 [2024-04-18 19:20:02.862733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=0x1 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=decompress 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@23 -- # accel_opc=decompress 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=software 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@22 -- # accel_module=software 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=32 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- 
accel/accel.sh@20 -- # val=32 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=2 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val=Yes 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:47.323 19:20:03 -- accel/accel.sh@20 -- # val= 00:26:47.323 19:20:03 -- accel/accel.sh@21 -- # case "$var" in 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # IFS=: 00:26:47.323 19:20:03 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@20 -- # val= 00:26:49.852 19:20:05 -- accel/accel.sh@21 -- # case "$var" in 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:49.852 19:20:05 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:26:49.852 19:20:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.852 00:26:49.852 real 0m2.835s 00:26:49.852 user 0m2.549s 00:26:49.852 sys 0m0.212s 00:26:49.852 19:20:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:49.852 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 ************************************ 00:26:49.852 END 
TEST accel_decomp_mthread 00:26:49.852 ************************************ 00:26:49.852 19:20:05 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:26:49.852 19:20:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:26:49.852 19:20:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:49.852 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 ************************************ 00:26:49.852 START TEST accel_deomp_full_mthread 00:26:49.852 ************************************ 00:26:49.852 19:20:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:26:49.852 19:20:05 -- accel/accel.sh@16 -- # local accel_opc 00:26:49.852 19:20:05 -- accel/accel.sh@17 -- # local accel_module 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # IFS=: 00:26:49.852 19:20:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:26:49.852 19:20:05 -- accel/accel.sh@19 -- # read -r var val 00:26:49.852 19:20:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:26:49.852 19:20:05 -- accel/accel.sh@12 -- # build_accel_config 00:26:49.852 19:20:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:49.852 19:20:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:49.852 19:20:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:49.852 19:20:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:49.852 19:20:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:49.852 19:20:05 -- accel/accel.sh@40 -- # local IFS=, 00:26:49.852 19:20:05 -- accel/accel.sh@41 -- # jq -r . 00:26:49.852 [2024-04-18 19:20:05.367759] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:26:49.852 [2024-04-18 19:20:05.367989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116123 ] 00:26:49.852 [2024-04-18 19:20:05.540477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.852 [2024-04-18 19:20:05.780001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=0x1 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=decompress 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@23 -- # accel_opc=decompress 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val='111250 bytes' 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=software 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@22 -- # accel_module=software 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=32 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- 
accel/accel.sh@20 -- # val=32 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=2 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val=Yes 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:50.429 19:20:06 -- accel/accel.sh@20 -- # val= 00:26:50.429 19:20:06 -- accel/accel.sh@21 -- # case "$var" in 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # IFS=: 00:26:50.429 19:20:06 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 19:20:08 -- accel/accel.sh@20 -- # val= 00:26:52.329 19:20:08 -- accel/accel.sh@21 -- # case "$var" in 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # IFS=: 00:26:52.329 19:20:08 -- accel/accel.sh@19 -- # read -r var val 00:26:52.329 ************************************ 00:26:52.329 END TEST accel_deomp_full_mthread 00:26:52.329 ************************************ 00:26:52.329 19:20:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:26:52.329 19:20:08 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:26:52.329 19:20:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:52.329 00:26:52.329 real 0m2.916s 00:26:52.329 user 0m2.665s 00:26:52.329 sys 0m0.175s 00:26:52.329 19:20:08 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:26:52.329 19:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.586 19:20:08 -- accel/accel.sh@124 -- # [[ n == y ]] 00:26:52.586 19:20:08 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:26:52.586 19:20:08 -- accel/accel.sh@137 -- # build_accel_config 00:26:52.586 19:20:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:26:52.586 19:20:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:26:52.586 19:20:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:52.586 19:20:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:52.586 19:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:52.586 19:20:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:52.586 19:20:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:26:52.586 19:20:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.586 19:20:08 -- accel/accel.sh@40 -- # local IFS=, 00:26:52.586 19:20:08 -- accel/accel.sh@41 -- # jq -r . 00:26:52.586 ************************************ 00:26:52.586 START TEST accel_dif_functional_tests 00:26:52.586 ************************************ 00:26:52.586 19:20:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:26:52.586 [2024-04-18 19:20:08.391319] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:52.586 [2024-04-18 19:20:08.391506] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116185 ] 00:26:52.844 [2024-04-18 19:20:08.566909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.102 [2024-04-18 19:20:08.802916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.102 [2024-04-18 19:20:08.803086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.102 [2024-04-18 19:20:08.803088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.361 00:26:53.361 00:26:53.361 CUnit - A unit testing framework for C - Version 2.1-3 00:26:53.361 http://cunit.sourceforge.net/ 00:26:53.361 00:26:53.361 00:26:53.361 Suite: accel_dif 00:26:53.361 Test: verify: DIF generated, GUARD check ...passed 00:26:53.361 Test: verify: DIF generated, APPTAG check ...passed 00:26:53.361 Test: verify: DIF generated, REFTAG check ...passed 00:26:53.361 Test: verify: DIF not generated, GUARD check ...[2024-04-18 19:20:09.171547] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:26:53.361 passed 00:26:53.361 Test: verify: DIF not generated, APPTAG check ...passed 00:26:53.361 Test: verify: DIF not generated, REFTAG check ...passed 00:26:53.361 Test: verify: APPTAG correct, APPTAG check ...passed 00:26:53.361 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:26:53.361 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:26:53.361 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:26:53.361 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:26:53.361 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:26:53.361 Test: generate copy: DIF generated, GUARD check ...passed 00:26:53.362 Test: generate copy: DIF generated, APTTAG check ...passed 00:26:53.362 Test: generate copy: DIF generated, REFTAG check ...passed 00:26:53.362 Test: generate copy: DIF generated, no GUARD check flag set 
...passed[2024-04-18 19:20:09.171682] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:26:53.362 [2024-04-18 19:20:09.171784] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:26:53.362 [2024-04-18 19:20:09.171830] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:26:53.362 [2024-04-18 19:20:09.171892] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:26:53.362 [2024-04-18 19:20:09.171952] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:26:53.362 [2024-04-18 19:20:09.172090] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:26:53.362 [2024-04-18 19:20:09.172326] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:26:53.362 00:26:53.362 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:26:53.362 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:26:53.362 Test: generate copy: iovecs-len validate ...[2024-04-18 19:20:09.172755] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:26:53.362 passed 00:26:53.362 Test: generate copy: buffer alignment validate ...passed 00:26:53.362 00:26:53.362 Run Summary: Type Total Ran Passed Failed Inactive 00:26:53.362 suites 1 1 n/a 0 0 00:26:53.362 tests 20 20 20 0 0 00:26:53.362 asserts 204 204 204 0 n/a 00:26:53.362 00:26:53.362 Elapsed time = 0.009 seconds 00:26:54.735 ************************************ 00:26:54.735 END TEST accel_dif_functional_tests 00:26:54.735 ************************************ 00:26:54.735 00:26:54.735 real 0m2.308s 00:26:54.735 user 0m4.585s 00:26:54.735 sys 0m0.256s 00:26:54.735 19:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:54.735 19:20:10 -- common/autotest_common.sh@10 -- # set +x 00:26:54.735 00:26:54.735 real 1m9.845s 00:26:54.735 user 1m16.743s 00:26:54.735 sys 0m6.291s 00:26:54.735 19:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:54.735 ************************************ 00:26:54.735 19:20:10 -- common/autotest_common.sh@10 -- # set +x 00:26:54.735 END TEST accel 00:26:54.735 ************************************ 00:26:54.993 19:20:10 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:26:54.993 19:20:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:54.993 19:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:54.993 19:20:10 -- common/autotest_common.sh@10 -- # set +x 00:26:54.993 ************************************ 00:26:54.993 START TEST accel_rpc 00:26:54.993 ************************************ 00:26:54.993 19:20:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:26:54.993 * Looking for test storage... 
00:26:54.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:26:54.993 19:20:10 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:54.993 19:20:10 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=116292 00:26:54.993 19:20:10 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:26:54.993 19:20:10 -- accel/accel_rpc.sh@15 -- # waitforlisten 116292 00:26:54.994 19:20:10 -- common/autotest_common.sh@817 -- # '[' -z 116292 ']' 00:26:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.994 19:20:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.994 19:20:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:54.994 19:20:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.994 19:20:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:54.994 19:20:10 -- common/autotest_common.sh@10 -- # set +x 00:26:54.994 [2024-04-18 19:20:10.907548] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:26:54.994 [2024-04-18 19:20:10.907709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116292 ] 00:26:55.251 [2024-04-18 19:20:11.073618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.509 [2024-04-18 19:20:11.316030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.075 19:20:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:56.075 19:20:11 -- common/autotest_common.sh@850 -- # return 0 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:26:56.075 19:20:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:56.075 19:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.075 19:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.075 ************************************ 00:26:56.075 START TEST accel_assign_opcode 00:26:56.075 ************************************ 00:26:56.075 19:20:11 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:26:56.075 19:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.075 19:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.075 [2024-04-18 19:20:11.844997] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:26:56.075 19:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:26:56.075 19:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.075 19:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:56.075 [2024-04-18 19:20:11.852901] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:26:56.075 19:20:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.075 19:20:11 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:26:56.075 19:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.075 19:20:11 -- common/autotest_common.sh@10 -- # set +x 00:26:57.444 19:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.444 19:20:13 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:26:57.444 19:20:13 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:26:57.444 19:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.444 19:20:13 -- common/autotest_common.sh@10 -- # set +x 00:26:57.444 19:20:13 -- accel/accel_rpc.sh@42 -- # grep software 00:26:57.444 19:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.444 software 00:26:57.444 00:26:57.444 real 0m1.267s 00:26:57.444 user 0m0.059s 00:26:57.444 sys 0m0.005s 00:26:57.444 19:20:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:57.444 ************************************ 00:26:57.444 19:20:13 -- common/autotest_common.sh@10 -- # set +x 00:26:57.444 END TEST accel_assign_opcode 00:26:57.444 ************************************ 00:26:57.444 19:20:13 -- accel/accel_rpc.sh@55 -- # killprocess 116292 00:26:57.444 19:20:13 -- common/autotest_common.sh@936 -- # '[' -z 116292 ']' 00:26:57.444 19:20:13 -- common/autotest_common.sh@940 -- # kill -0 116292 00:26:57.444 19:20:13 -- common/autotest_common.sh@941 -- # uname 00:26:57.444 19:20:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:57.445 19:20:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116292 00:26:57.445 19:20:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:57.445 killing process with pid 116292 00:26:57.445 19:20:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:57.445 19:20:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116292' 00:26:57.445 19:20:13 -- common/autotest_common.sh@955 -- # kill 116292 00:26:57.445 19:20:13 -- common/autotest_common.sh@960 -- # wait 116292 00:27:00.725 00:27:00.725 real 0m5.492s 00:27:00.725 user 0m5.336s 00:27:00.725 sys 0m0.656s 00:27:00.725 19:20:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:00.725 ************************************ 00:27:00.725 END TEST accel_rpc 00:27:00.725 ************************************ 00:27:00.725 19:20:16 -- common/autotest_common.sh@10 -- # set +x 00:27:00.725 19:20:16 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:00.725 19:20:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.725 19:20:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.725 19:20:16 -- common/autotest_common.sh@10 -- # set +x 00:27:00.725 ************************************ 00:27:00.725 START TEST app_cmdline 00:27:00.725 ************************************ 00:27:00.725 19:20:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:00.725 * Looking for test storage... 
00:27:00.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:00.725 19:20:16 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:27:00.725 19:20:16 -- app/cmdline.sh@17 -- # spdk_tgt_pid=116462 00:27:00.725 19:20:16 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:27:00.725 19:20:16 -- app/cmdline.sh@18 -- # waitforlisten 116462 00:27:00.725 19:20:16 -- common/autotest_common.sh@817 -- # '[' -z 116462 ']' 00:27:00.725 19:20:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.725 19:20:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:00.725 19:20:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.725 19:20:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:00.725 19:20:16 -- common/autotest_common.sh@10 -- # set +x 00:27:00.725 [2024-04-18 19:20:16.498566] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:27:00.725 [2024-04-18 19:20:16.499141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116462 ] 00:27:00.984 [2024-04-18 19:20:16.666625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.243 [2024-04-18 19:20:17.000915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.626 19:20:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:02.626 19:20:18 -- common/autotest_common.sh@850 -- # return 0 00:27:02.626 19:20:18 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:27:02.626 { 00:27:02.626 "version": "SPDK v24.05-pre git sha1 99b3305a5", 00:27:02.626 "fields": { 00:27:02.626 "major": 24, 00:27:02.626 "minor": 5, 00:27:02.626 "patch": 0, 00:27:02.626 "suffix": "-pre", 00:27:02.626 "commit": "99b3305a5" 00:27:02.626 } 00:27:02.626 } 00:27:02.626 19:20:18 -- app/cmdline.sh@22 -- # expected_methods=() 00:27:02.626 19:20:18 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:27:02.626 19:20:18 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:27:02.626 19:20:18 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:27:02.626 19:20:18 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:27:02.626 19:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.626 19:20:18 -- common/autotest_common.sh@10 -- # set +x 00:27:02.626 19:20:18 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:27:02.626 19:20:18 -- app/cmdline.sh@26 -- # sort 00:27:02.626 19:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.626 19:20:18 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:27:02.626 19:20:18 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:27:02.626 19:20:18 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:02.626 19:20:18 -- common/autotest_common.sh@638 -- # local es=0 00:27:02.626 19:20:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:02.626 19:20:18 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.626 19:20:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:02.626 19:20:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.626 19:20:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:02.626 19:20:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.626 19:20:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:02.626 19:20:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.626 19:20:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:02.626 19:20:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:02.884 request: 00:27:02.884 { 00:27:02.884 "method": "env_dpdk_get_mem_stats", 00:27:02.884 "req_id": 1 00:27:02.884 } 00:27:02.884 Got JSON-RPC error response 00:27:02.884 response: 00:27:02.884 { 00:27:02.884 "code": -32601, 00:27:02.884 "message": "Method not found" 00:27:02.884 } 00:27:02.884 19:20:18 -- common/autotest_common.sh@641 -- # es=1 00:27:02.884 19:20:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:02.884 19:20:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:02.884 19:20:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:02.884 19:20:18 -- app/cmdline.sh@1 -- # killprocess 116462 00:27:02.884 19:20:18 -- common/autotest_common.sh@936 -- # '[' -z 116462 ']' 00:27:02.884 19:20:18 -- common/autotest_common.sh@940 -- # kill -0 116462 00:27:02.884 19:20:18 -- common/autotest_common.sh@941 -- # uname 00:27:02.884 19:20:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.884 19:20:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116462 00:27:02.884 19:20:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.884 19:20:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.884 19:20:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116462' 00:27:02.884 killing process with pid 116462 00:27:02.884 19:20:18 -- common/autotest_common.sh@955 -- # kill 116462 00:27:02.884 19:20:18 -- common/autotest_common.sh@960 -- # wait 116462 00:27:06.166 ************************************ 00:27:06.166 END TEST app_cmdline 00:27:06.166 ************************************ 00:27:06.166 00:27:06.166 real 0m5.461s 00:27:06.166 user 0m5.712s 00:27:06.166 sys 0m0.730s 00:27:06.166 19:20:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:06.166 19:20:21 -- common/autotest_common.sh@10 -- # set +x 00:27:06.166 19:20:21 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:06.166 19:20:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.166 19:20:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.166 19:20:21 -- common/autotest_common.sh@10 -- # set +x 00:27:06.166 ************************************ 00:27:06.166 START TEST version 00:27:06.166 ************************************ 00:27:06.166 19:20:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:06.166 * Looking for test storage... 
00:27:06.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:06.166 19:20:21 -- app/version.sh@17 -- # get_header_version major 00:27:06.166 19:20:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:06.166 19:20:21 -- app/version.sh@14 -- # cut -f2 00:27:06.166 19:20:21 -- app/version.sh@14 -- # tr -d '"' 00:27:06.166 19:20:21 -- app/version.sh@17 -- # major=24 00:27:06.166 19:20:21 -- app/version.sh@18 -- # get_header_version minor 00:27:06.166 19:20:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:06.166 19:20:21 -- app/version.sh@14 -- # cut -f2 00:27:06.166 19:20:21 -- app/version.sh@14 -- # tr -d '"' 00:27:06.166 19:20:21 -- app/version.sh@18 -- # minor=5 00:27:06.166 19:20:21 -- app/version.sh@19 -- # get_header_version patch 00:27:06.166 19:20:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:06.166 19:20:21 -- app/version.sh@14 -- # cut -f2 00:27:06.166 19:20:21 -- app/version.sh@14 -- # tr -d '"' 00:27:06.166 19:20:21 -- app/version.sh@19 -- # patch=0 00:27:06.166 19:20:21 -- app/version.sh@20 -- # get_header_version suffix 00:27:06.166 19:20:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:06.166 19:20:21 -- app/version.sh@14 -- # cut -f2 00:27:06.166 19:20:21 -- app/version.sh@14 -- # tr -d '"' 00:27:06.166 19:20:21 -- app/version.sh@20 -- # suffix=-pre 00:27:06.166 19:20:21 -- app/version.sh@22 -- # version=24.5 00:27:06.166 19:20:21 -- app/version.sh@25 -- # (( patch != 0 )) 00:27:06.166 19:20:21 -- app/version.sh@28 -- # version=24.5rc0 00:27:06.166 19:20:21 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:06.166 19:20:21 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:27:06.166 19:20:22 -- app/version.sh@30 -- # py_version=24.5rc0 00:27:06.166 19:20:22 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:27:06.166 00:27:06.166 real 0m0.162s 00:27:06.166 user 0m0.126s 00:27:06.166 sys 0m0.066s 00:27:06.166 ************************************ 00:27:06.166 END TEST version 00:27:06.166 ************************************ 00:27:06.166 19:20:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:06.166 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:27:06.166 19:20:22 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:27:06.166 19:20:22 -- spdk/autotest.sh@185 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:27:06.166 19:20:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.166 19:20:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.166 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:27:06.425 ************************************ 00:27:06.425 START TEST blockdev_general 00:27:06.425 ************************************ 00:27:06.425 19:20:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:27:06.425 * Looking for test storage... 
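Note: the version test above recovers the version string by parsing the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX macros out of include/spdk/version.h and cross-checks the result (24.5rc0) against python3's spdk.__version__. A condensed sketch of the extraction pipeline as it appears in the trace; the real helper in app/version.sh is get_header_version and handles the argument case conversion itself, so the function name and simplified assembly below are illustrative only:

    # Pull one component of the version out of version.h, e.g. MAJOR, MINOR, PATCH or SUFFIX.
    get_version_field() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }

    major=$(get_version_field MAJOR)     # 24 in this run
    minor=$(get_version_field MINOR)     # 5
    patch=$(get_version_field PATCH)     # 0
    suffix=$(get_version_field SUFFIX)   # -pre

    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    [[ -n ${suffix} ]] && version="${version}rc0"   # -> 24.5rc0, matching spdk.__version__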
00:27:06.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:06.425 19:20:22 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:06.425 19:20:22 -- bdev/nbd_common.sh@6 -- # set -e 00:27:06.425 19:20:22 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:06.425 19:20:22 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:06.425 19:20:22 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:06.425 19:20:22 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:06.425 19:20:22 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:06.425 19:20:22 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:06.425 19:20:22 -- bdev/blockdev.sh@20 -- # : 00:27:06.425 19:20:22 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:27:06.425 19:20:22 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:27:06.425 19:20:22 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:27:06.425 19:20:22 -- bdev/blockdev.sh@674 -- # uname -s 00:27:06.425 19:20:22 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:27:06.425 19:20:22 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:27:06.425 19:20:22 -- bdev/blockdev.sh@682 -- # test_type=bdev 00:27:06.425 19:20:22 -- bdev/blockdev.sh@683 -- # crypto_device= 00:27:06.425 19:20:22 -- bdev/blockdev.sh@684 -- # dek= 00:27:06.425 19:20:22 -- bdev/blockdev.sh@685 -- # env_ctx= 00:27:06.425 19:20:22 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:27:06.425 19:20:22 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:27:06.425 19:20:22 -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:27:06.425 19:20:22 -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:27:06.425 19:20:22 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:27:06.425 19:20:22 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=116674 00:27:06.425 19:20:22 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:06.426 19:20:22 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:27:06.426 19:20:22 -- bdev/blockdev.sh@49 -- # waitforlisten 116674 00:27:06.426 19:20:22 -- common/autotest_common.sh@817 -- # '[' -z 116674 ']' 00:27:06.426 19:20:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.426 19:20:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:06.426 19:20:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.426 19:20:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:06.426 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:27:06.426 [2024-04-18 19:20:22.290817] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:27:06.426 [2024-04-18 19:20:22.291231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116674 ] 00:27:06.684 [2024-04-18 19:20:22.470761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.943 [2024-04-18 19:20:22.765509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.508 19:20:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:07.508 19:20:23 -- common/autotest_common.sh@850 -- # return 0 00:27:07.508 19:20:23 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:27:07.508 19:20:23 -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:27:07.508 19:20:23 -- bdev/blockdev.sh@53 -- # rpc_cmd 00:27:07.508 19:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.508 19:20:23 -- common/autotest_common.sh@10 -- # set +x 00:27:08.881 [2024-04-18 19:20:24.381725] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:08.881 [2024-04-18 19:20:24.381873] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:08.881 00:27:08.881 [2024-04-18 19:20:24.389742] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:08.881 [2024-04-18 19:20:24.389987] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:08.881 00:27:08.881 Malloc0 00:27:08.881 Malloc1 00:27:08.881 Malloc2 00:27:08.881 Malloc3 00:27:08.881 Malloc4 00:27:08.881 Malloc5 00:27:08.881 Malloc6 00:27:09.139 Malloc7 00:27:09.139 Malloc8 00:27:09.139 Malloc9 00:27:09.139 [2024-04-18 19:20:24.958534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:09.140 [2024-04-18 19:20:24.960457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.140 [2024-04-18 19:20:24.960549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:27:09.140 [2024-04-18 19:20:24.960753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.140 [2024-04-18 19:20:24.963549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.140 [2024-04-18 19:20:24.963778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:27:09.140 TestPT 00:27:09.140 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.140 19:20:25 -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:27:09.140 5000+0 records in 00:27:09.140 5000+0 records out 00:27:09.140 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0306296 s, 334 MB/s 00:27:09.140 19:20:25 -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:27:09.140 19:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.140 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 AIO0 00:27:09.399 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.399 19:20:25 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:27:09.399 19:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.399 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.399 19:20:25 -- bdev/blockdev.sh@740 -- # cat 00:27:09.399 19:20:25 
-- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:27:09.399 19:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.399 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.399 19:20:25 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:27:09.399 19:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.399 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.399 19:20:25 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:09.399 19:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.399 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.399 19:20:25 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:27:09.399 19:20:25 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:27:09.399 19:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.399 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 19:20:25 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:27:09.399 19:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.399 19:20:25 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:27:09.399 19:20:25 -- bdev/blockdev.sh@749 -- # jq -r .name 00:27:09.401 19:20:25 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "dadf98a7-65c1-4723-983e-8da8d05d48f2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dadf98a7-65c1-4723-983e-8da8d05d48f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ba679c4a-180f-5a92-aca7-776982b68bd2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ba679c4a-180f-5a92-aca7-776982b68bd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "63fb2c7b-57e5-5fd7-9a8c-3ae88cce2bc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "63fb2c7b-57e5-5fd7-9a8c-3ae88cce2bc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "e53c50ed-ae7b-5871-9edc-8ae9472d4bdd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e53c50ed-ae7b-5871-9edc-8ae9472d4bdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "20b241fb-2131-58ac-a02a-42e1a8a4af01"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20b241fb-2131-58ac-a02a-42e1a8a4af01",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f4d44533-0704-541f-a2a2-5ba45f2d2557"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f4d44533-0704-541f-a2a2-5ba45f2d2557",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "dc5105fa-821c-5461-9814-fe8ee16bb0b2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc5105fa-821c-5461-9814-fe8ee16bb0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e99a37b4-4e2b-5ed5-880f-f6342e6900bb"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e99a37b4-4e2b-5ed5-880f-f6342e6900bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "ffe32405-bae3-544f-8cf6-b8488ae72079"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ffe32405-bae3-544f-8cf6-b8488ae72079",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8a0f05cf-7ae7-5128-a9e8-2a34f5912105"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a0f05cf-7ae7-5128-a9e8-2a34f5912105",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0422e0b9-bedd-57f2-9157-4caf686f556e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0422e0b9-bedd-57f2-9157-4caf686f556e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "97e95f00-dfa1-5f3b-a65d-899ef5bdd73a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "97e95f00-dfa1-5f3b-a65d-899ef5bdd73a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "8a6767a6-ed2c-4be0-b1f4-414d69803396"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8a6767a6-ed2c-4be0-b1f4-414d69803396",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8a6767a6-ed2c-4be0-b1f4-414d69803396",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "9cd96815-af1e-4707-a06e-5a804b3a447f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4819b71c-3a88-4c5e-92d9-631a07ccdf82",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b702b067-b7b6-4614-bcba-4112c5b421e9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b702b067-b7b6-4614-bcba-4112c5b421e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b702b067-b7b6-4614-bcba-4112c5b421e9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "db5e1962-fbfb-4f16-838f-fbba9281c8b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "16336309-a6e2-48cf-a365-c2ff69474d48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "28b3d2ae-a464-4566-975c-98b3f93d1f91"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "28b3d2ae-a464-4566-975c-98b3f93d1f91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "28b3d2ae-a464-4566-975c-98b3f93d1f91",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "902b37f4-83d4-4ced-99c2-a189470e8114",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "85524745-491f-4506-b55a-9e3ff3e5caa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b3790576-8a27-48bb-86c4-673bba9a0bf1"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b3790576-8a27-48bb-86c4-673bba9a0bf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:27:09.401 19:20:25 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:27:09.401 19:20:25 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:27:09.401 19:20:25 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:27:09.401 19:20:25 -- bdev/blockdev.sh@754 -- # killprocess 116674 00:27:09.401 19:20:25 -- common/autotest_common.sh@936 -- # '[' -z 116674 ']' 00:27:09.401 19:20:25 -- common/autotest_common.sh@940 -- # kill -0 116674 00:27:09.401 19:20:25 -- common/autotest_common.sh@941 -- # uname 00:27:09.401 19:20:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:09.401 19:20:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116674 00:27:09.401 19:20:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:09.401 19:20:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:09.401 19:20:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116674' 00:27:09.401 killing process with pid 116674 00:27:09.401 19:20:25 -- common/autotest_common.sh@955 -- # kill 116674 00:27:09.401 19:20:25 -- 
common/autotest_common.sh@960 -- # wait 116674 00:27:13.653 19:20:29 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:13.653 19:20:29 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:27:13.653 19:20:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:13.653 19:20:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.653 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:27:13.653 ************************************ 00:27:13.653 START TEST bdev_hello_world 00:27:13.653 ************************************ 00:27:13.653 19:20:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:27:13.653 [2024-04-18 19:20:29.496336] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:27:13.653 [2024-04-18 19:20:29.496725] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116812 ] 00:27:13.914 [2024-04-18 19:20:29.667408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.173 [2024-04-18 19:20:29.905432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.743 [2024-04-18 19:20:30.397529] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:14.743 [2024-04-18 19:20:30.397936] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:14.743 [2024-04-18 19:20:30.405471] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:14.743 [2024-04-18 19:20:30.405622] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:14.743 [2024-04-18 19:20:30.413518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:14.743 [2024-04-18 19:20:30.413816] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:14.743 [2024-04-18 19:20:30.413967] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:14.743 [2024-04-18 19:20:30.653893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:14.743 [2024-04-18 19:20:30.655947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.743 [2024-04-18 19:20:30.656127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:14.743 [2024-04-18 19:20:30.656234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.743 [2024-04-18 19:20:30.658939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.743 [2024-04-18 19:20:30.659146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:27:15.309 [2024-04-18 19:20:31.028029] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:15.309 [2024-04-18 19:20:31.028330] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:27:15.309 [2024-04-18 19:20:31.028446] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:15.309 [2024-04-18 19:20:31.028608] hello_bdev.c: 138:hello_write: *NOTICE*: Writing 
to the bdev 00:27:15.309 [2024-04-18 19:20:31.028714] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:15.309 [2024-04-18 19:20:31.028924] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:15.309 [2024-04-18 19:20:31.029042] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:15.309 00:27:15.309 [2024-04-18 19:20:31.029103] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:17.839 ************************************ 00:27:17.839 END TEST bdev_hello_world 00:27:17.839 ************************************ 00:27:17.839 00:27:17.839 real 0m4.299s 00:27:17.839 user 0m3.826s 00:27:17.839 sys 0m0.316s 00:27:17.839 19:20:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:17.839 19:20:33 -- common/autotest_common.sh@10 -- # set +x 00:27:17.839 19:20:33 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:27:17.839 19:20:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:17.839 19:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:17.839 19:20:33 -- common/autotest_common.sh@10 -- # set +x 00:27:18.098 ************************************ 00:27:18.098 START TEST bdev_bounds 00:27:18.098 ************************************ 00:27:18.098 19:20:33 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:27:18.098 19:20:33 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:18.098 19:20:33 -- bdev/blockdev.sh@290 -- # bdevio_pid=116887 00:27:18.098 19:20:33 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:18.098 19:20:33 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 116887' 00:27:18.098 Process bdevio pid: 116887 00:27:18.098 19:20:33 -- bdev/blockdev.sh@293 -- # waitforlisten 116887 00:27:18.098 19:20:33 -- common/autotest_common.sh@817 -- # '[' -z 116887 ']' 00:27:18.098 19:20:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.098 19:20:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:18.098 19:20:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.098 19:20:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:18.098 19:20:33 -- common/autotest_common.sh@10 -- # set +x 00:27:18.098 [2024-04-18 19:20:33.896408] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:27:18.098 [2024-04-18 19:20:33.896771] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116887 ] 00:27:18.360 [2024-04-18 19:20:34.088107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:18.621 [2024-04-18 19:20:34.342323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.621 [2024-04-18 19:20:34.342394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.621 [2024-04-18 19:20:34.342392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.187 [2024-04-18 19:20:34.820378] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:19.187 [2024-04-18 19:20:34.820746] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:19.187 [2024-04-18 19:20:34.828345] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:19.187 [2024-04-18 19:20:34.828613] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:19.187 [2024-04-18 19:20:34.836385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:19.187 [2024-04-18 19:20:34.836639] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:19.187 [2024-04-18 19:20:34.836763] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:19.187 [2024-04-18 19:20:35.069634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:19.187 [2024-04-18 19:20:35.070001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.187 [2024-04-18 19:20:35.070090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:19.187 [2024-04-18 19:20:35.070224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.187 [2024-04-18 19:20:35.073121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.187 [2024-04-18 19:20:35.073349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:27:19.754 19:20:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:19.754 19:20:35 -- common/autotest_common.sh@850 -- # return 0 00:27:19.754 19:20:35 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:19.754 I/O targets: 00:27:19.754 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:27:19.754 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:27:19.754 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:27:19.755 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:27:19.755 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:27:19.755 raid0: 131072 blocks of 512 bytes (64 MiB) 00:27:19.755 concat0: 131072 blocks of 512 bytes (64 MiB) 00:27:19.755 raid1: 65536 blocks of 512 bytes (32 MiB) 00:27:19.755 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:27:19.755 00:27:19.755 00:27:19.755 CUnit - A unit testing framework for C - Version 2.1-3 00:27:19.755 http://cunit.sourceforge.net/ 00:27:19.755 00:27:19.755 00:27:19.755 Suite: bdevio tests on: AIO0 00:27:19.755 Test: blockdev write read block ...passed 00:27:19.755 Test: blockdev write zeroes read block ...passed 00:27:19.755 Test: blockdev write zeroes read no split ...passed 00:27:19.755 Test: blockdev write zeroes read split ...passed 00:27:20.013 Test: blockdev write zeroes read split partial ...passed 00:27:20.013 Test: blockdev reset ...passed 00:27:20.013 Test: blockdev write read 8 blocks ...passed 00:27:20.013 Test: blockdev write read size > 128k ...passed 00:27:20.013 Test: blockdev write read invalid size ...passed 00:27:20.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.013 Test: blockdev write read max offset ...passed 00:27:20.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.013 Test: blockdev writev readv 8 blocks ...passed 00:27:20.013 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.013 Test: blockdev writev readv block ...passed 00:27:20.013 Test: blockdev writev readv size > 128k ...passed 00:27:20.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.013 Test: blockdev comparev and writev ...passed 00:27:20.013 Test: blockdev nvme passthru rw ...passed 00:27:20.013 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.013 Test: blockdev nvme admin passthru ...passed 00:27:20.013 Test: blockdev copy ...passed 00:27:20.013 Suite: bdevio tests on: raid1 00:27:20.013 Test: blockdev write read block ...passed 00:27:20.013 Test: blockdev write zeroes read block ...passed 00:27:20.013 Test: blockdev write zeroes read no split ...passed 00:27:20.013 Test: blockdev write zeroes read split ...passed 00:27:20.013 Test: blockdev write zeroes read split partial ...passed 00:27:20.013 Test: blockdev reset ...passed 00:27:20.013 Test: blockdev write read 8 blocks ...passed 00:27:20.013 Test: blockdev write read size > 128k ...passed 00:27:20.013 Test: blockdev write read invalid size ...passed 00:27:20.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.013 Test: blockdev write read max offset ...passed 00:27:20.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.013 Test: blockdev writev readv 8 blocks ...passed 00:27:20.013 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.013 Test: blockdev writev readv block ...passed 00:27:20.013 Test: blockdev writev readv size > 128k ...passed 00:27:20.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.013 Test: blockdev comparev and writev ...passed 00:27:20.013 Test: blockdev nvme passthru rw ...passed 00:27:20.013 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.013 Test: blockdev nvme admin passthru ...passed 00:27:20.013 Test: blockdev copy ...passed 00:27:20.013 Suite: bdevio tests on: concat0 00:27:20.013 Test: blockdev write read block ...passed 00:27:20.013 Test: blockdev write zeroes read block ...passed 00:27:20.013 Test: blockdev write zeroes read no split ...passed 00:27:20.013 Test: blockdev write zeroes read split ...passed 00:27:20.013 Test: blockdev write zeroes read split partial ...passed 00:27:20.013 Test: blockdev reset 
...passed 00:27:20.013 Test: blockdev write read 8 blocks ...passed 00:27:20.013 Test: blockdev write read size > 128k ...passed 00:27:20.013 Test: blockdev write read invalid size ...passed 00:27:20.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.013 Test: blockdev write read max offset ...passed 00:27:20.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.013 Test: blockdev writev readv 8 blocks ...passed 00:27:20.013 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.013 Test: blockdev writev readv block ...passed 00:27:20.013 Test: blockdev writev readv size > 128k ...passed 00:27:20.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.013 Test: blockdev comparev and writev ...passed 00:27:20.013 Test: blockdev nvme passthru rw ...passed 00:27:20.013 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.013 Test: blockdev nvme admin passthru ...passed 00:27:20.013 Test: blockdev copy ...passed 00:27:20.013 Suite: bdevio tests on: raid0 00:27:20.013 Test: blockdev write read block ...passed 00:27:20.013 Test: blockdev write zeroes read block ...passed 00:27:20.013 Test: blockdev write zeroes read no split ...passed 00:27:20.270 Test: blockdev write zeroes read split ...passed 00:27:20.270 Test: blockdev write zeroes read split partial ...passed 00:27:20.270 Test: blockdev reset ...passed 00:27:20.270 Test: blockdev write read 8 blocks ...passed 00:27:20.270 Test: blockdev write read size > 128k ...passed 00:27:20.270 Test: blockdev write read invalid size ...passed 00:27:20.270 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.270 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.270 Test: blockdev write read max offset ...passed 00:27:20.270 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.270 Test: blockdev writev readv 8 blocks ...passed 00:27:20.270 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.270 Test: blockdev writev readv block ...passed 00:27:20.270 Test: blockdev writev readv size > 128k ...passed 00:27:20.270 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.270 Test: blockdev comparev and writev ...passed 00:27:20.270 Test: blockdev nvme passthru rw ...passed 00:27:20.270 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.270 Test: blockdev nvme admin passthru ...passed 00:27:20.270 Test: blockdev copy ...passed 00:27:20.270 Suite: bdevio tests on: TestPT 00:27:20.270 Test: blockdev write read block ...passed 00:27:20.271 Test: blockdev write zeroes read block ...passed 00:27:20.271 Test: blockdev write zeroes read no split ...passed 00:27:20.271 Test: blockdev write zeroes read split ...passed 00:27:20.271 Test: blockdev write zeroes read split partial ...passed 00:27:20.271 Test: blockdev reset ...passed 00:27:20.271 Test: blockdev write read 8 blocks ...passed 00:27:20.271 Test: blockdev write read size > 128k ...passed 00:27:20.271 Test: blockdev write read invalid size ...passed 00:27:20.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.271 Test: blockdev write read max offset ...passed 00:27:20.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.271 Test: blockdev writev readv 8 blocks 
...passed 00:27:20.271 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.271 Test: blockdev writev readv block ...passed 00:27:20.271 Test: blockdev writev readv size > 128k ...passed 00:27:20.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.271 Test: blockdev comparev and writev ...passed 00:27:20.271 Test: blockdev nvme passthru rw ...passed 00:27:20.271 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.271 Test: blockdev nvme admin passthru ...passed 00:27:20.271 Test: blockdev copy ...passed 00:27:20.271 Suite: bdevio tests on: Malloc2p7 00:27:20.271 Test: blockdev write read block ...passed 00:27:20.271 Test: blockdev write zeroes read block ...passed 00:27:20.271 Test: blockdev write zeroes read no split ...passed 00:27:20.271 Test: blockdev write zeroes read split ...passed 00:27:20.271 Test: blockdev write zeroes read split partial ...passed 00:27:20.271 Test: blockdev reset ...passed 00:27:20.271 Test: blockdev write read 8 blocks ...passed 00:27:20.271 Test: blockdev write read size > 128k ...passed 00:27:20.528 Test: blockdev write read invalid size ...passed 00:27:20.528 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.528 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.528 Test: blockdev write read max offset ...passed 00:27:20.528 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.528 Test: blockdev writev readv 8 blocks ...passed 00:27:20.528 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.528 Test: blockdev writev readv block ...passed 00:27:20.528 Test: blockdev writev readv size > 128k ...passed 00:27:20.528 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.528 Test: blockdev comparev and writev ...passed 00:27:20.528 Test: blockdev nvme passthru rw ...passed 00:27:20.528 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.528 Test: blockdev nvme admin passthru ...passed 00:27:20.528 Test: blockdev copy ...passed 00:27:20.528 Suite: bdevio tests on: Malloc2p6 00:27:20.528 Test: blockdev write read block ...passed 00:27:20.528 Test: blockdev write zeroes read block ...passed 00:27:20.528 Test: blockdev write zeroes read no split ...passed 00:27:20.528 Test: blockdev write zeroes read split ...passed 00:27:20.528 Test: blockdev write zeroes read split partial ...passed 00:27:20.528 Test: blockdev reset ...passed 00:27:20.528 Test: blockdev write read 8 blocks ...passed 00:27:20.528 Test: blockdev write read size > 128k ...passed 00:27:20.528 Test: blockdev write read invalid size ...passed 00:27:20.528 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.528 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.528 Test: blockdev write read max offset ...passed 00:27:20.528 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.528 Test: blockdev writev readv 8 blocks ...passed 00:27:20.528 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.528 Test: blockdev writev readv block ...passed 00:27:20.528 Test: blockdev writev readv size > 128k ...passed 00:27:20.528 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.528 Test: blockdev comparev and writev ...passed 00:27:20.528 Test: blockdev nvme passthru rw ...passed 00:27:20.528 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.528 Test: blockdev nvme admin passthru ...passed 00:27:20.528 Test: blockdev copy ...passed 
00:27:20.528 Suite: bdevio tests on: Malloc2p5 00:27:20.528 Test: blockdev write read block ...passed 00:27:20.528 Test: blockdev write zeroes read block ...passed 00:27:20.528 Test: blockdev write zeroes read no split ...passed 00:27:20.528 Test: blockdev write zeroes read split ...passed 00:27:20.528 Test: blockdev write zeroes read split partial ...passed 00:27:20.528 Test: blockdev reset ...passed 00:27:20.528 Test: blockdev write read 8 blocks ...passed 00:27:20.528 Test: blockdev write read size > 128k ...passed 00:27:20.528 Test: blockdev write read invalid size ...passed 00:27:20.528 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.528 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.528 Test: blockdev write read max offset ...passed 00:27:20.528 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.528 Test: blockdev writev readv 8 blocks ...passed 00:27:20.528 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.528 Test: blockdev writev readv block ...passed 00:27:20.528 Test: blockdev writev readv size > 128k ...passed 00:27:20.528 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.528 Test: blockdev comparev and writev ...passed 00:27:20.528 Test: blockdev nvme passthru rw ...passed 00:27:20.528 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.528 Test: blockdev nvme admin passthru ...passed 00:27:20.528 Test: blockdev copy ...passed 00:27:20.528 Suite: bdevio tests on: Malloc2p4 00:27:20.528 Test: blockdev write read block ...passed 00:27:20.528 Test: blockdev write zeroes read block ...passed 00:27:20.528 Test: blockdev write zeroes read no split ...passed 00:27:20.528 Test: blockdev write zeroes read split ...passed 00:27:20.787 Test: blockdev write zeroes read split partial ...passed 00:27:20.787 Test: blockdev reset ...passed 00:27:20.787 Test: blockdev write read 8 blocks ...passed 00:27:20.787 Test: blockdev write read size > 128k ...passed 00:27:20.787 Test: blockdev write read invalid size ...passed 00:27:20.787 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.787 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.787 Test: blockdev write read max offset ...passed 00:27:20.787 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.787 Test: blockdev writev readv 8 blocks ...passed 00:27:20.787 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.787 Test: blockdev writev readv block ...passed 00:27:20.787 Test: blockdev writev readv size > 128k ...passed 00:27:20.787 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.787 Test: blockdev comparev and writev ...passed 00:27:20.787 Test: blockdev nvme passthru rw ...passed 00:27:20.787 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.787 Test: blockdev nvme admin passthru ...passed 00:27:20.787 Test: blockdev copy ...passed 00:27:20.787 Suite: bdevio tests on: Malloc2p3 00:27:20.787 Test: blockdev write read block ...passed 00:27:20.787 Test: blockdev write zeroes read block ...passed 00:27:20.787 Test: blockdev write zeroes read no split ...passed 00:27:20.787 Test: blockdev write zeroes read split ...passed 00:27:20.787 Test: blockdev write zeroes read split partial ...passed 00:27:20.787 Test: blockdev reset ...passed 00:27:20.787 Test: blockdev write read 8 blocks ...passed 00:27:20.787 Test: blockdev write read size > 128k ...passed 00:27:20.787 Test: 
blockdev write read invalid size ...passed 00:27:20.787 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.787 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.787 Test: blockdev write read max offset ...passed 00:27:20.787 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.787 Test: blockdev writev readv 8 blocks ...passed 00:27:20.787 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.787 Test: blockdev writev readv block ...passed 00:27:20.787 Test: blockdev writev readv size > 128k ...passed 00:27:20.787 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.787 Test: blockdev comparev and writev ...passed 00:27:20.787 Test: blockdev nvme passthru rw ...passed 00:27:20.787 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.787 Test: blockdev nvme admin passthru ...passed 00:27:20.787 Test: blockdev copy ...passed 00:27:20.787 Suite: bdevio tests on: Malloc2p2 00:27:20.787 Test: blockdev write read block ...passed 00:27:20.787 Test: blockdev write zeroes read block ...passed 00:27:20.787 Test: blockdev write zeroes read no split ...passed 00:27:20.787 Test: blockdev write zeroes read split ...passed 00:27:20.787 Test: blockdev write zeroes read split partial ...passed 00:27:20.787 Test: blockdev reset ...passed 00:27:20.787 Test: blockdev write read 8 blocks ...passed 00:27:20.787 Test: blockdev write read size > 128k ...passed 00:27:20.787 Test: blockdev write read invalid size ...passed 00:27:20.787 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:20.787 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:20.787 Test: blockdev write read max offset ...passed 00:27:20.787 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:20.787 Test: blockdev writev readv 8 blocks ...passed 00:27:20.787 Test: blockdev writev readv 30 x 1block ...passed 00:27:20.787 Test: blockdev writev readv block ...passed 00:27:20.787 Test: blockdev writev readv size > 128k ...passed 00:27:20.787 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:20.787 Test: blockdev comparev and writev ...passed 00:27:20.787 Test: blockdev nvme passthru rw ...passed 00:27:20.787 Test: blockdev nvme passthru vendor specific ...passed 00:27:20.787 Test: blockdev nvme admin passthru ...passed 00:27:20.787 Test: blockdev copy ...passed 00:27:20.787 Suite: bdevio tests on: Malloc2p1 00:27:20.787 Test: blockdev write read block ...passed 00:27:20.787 Test: blockdev write zeroes read block ...passed 00:27:20.787 Test: blockdev write zeroes read no split ...passed 00:27:20.787 Test: blockdev write zeroes read split ...passed 00:27:21.047 Test: blockdev write zeroes read split partial ...passed 00:27:21.047 Test: blockdev reset ...passed 00:27:21.047 Test: blockdev write read 8 blocks ...passed 00:27:21.047 Test: blockdev write read size > 128k ...passed 00:27:21.047 Test: blockdev write read invalid size ...passed 00:27:21.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:21.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:21.047 Test: blockdev write read max offset ...passed 00:27:21.047 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:21.047 Test: blockdev writev readv 8 blocks ...passed 00:27:21.047 Test: blockdev writev readv 30 x 1block ...passed 00:27:21.047 Test: blockdev writev readv block ...passed 
00:27:21.047 Test: blockdev writev readv size > 128k ...passed 00:27:21.047 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:21.047 Test: blockdev comparev and writev ...passed 00:27:21.047 Test: blockdev nvme passthru rw ...passed 00:27:21.047 Test: blockdev nvme passthru vendor specific ...passed 00:27:21.047 Test: blockdev nvme admin passthru ...passed 00:27:21.047 Test: blockdev copy ...passed 00:27:21.047 Suite: bdevio tests on: Malloc2p0 00:27:21.047 Test: blockdev write read block ...passed 00:27:21.047 Test: blockdev write zeroes read block ...passed 00:27:21.047 Test: blockdev write zeroes read no split ...passed 00:27:21.047 Test: blockdev write zeroes read split ...passed 00:27:21.047 Test: blockdev write zeroes read split partial ...passed 00:27:21.047 Test: blockdev reset ...passed 00:27:21.047 Test: blockdev write read 8 blocks ...passed 00:27:21.047 Test: blockdev write read size > 128k ...passed 00:27:21.047 Test: blockdev write read invalid size ...passed 00:27:21.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:21.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:21.047 Test: blockdev write read max offset ...passed 00:27:21.047 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:21.047 Test: blockdev writev readv 8 blocks ...passed 00:27:21.047 Test: blockdev writev readv 30 x 1block ...passed 00:27:21.047 Test: blockdev writev readv block ...passed 00:27:21.047 Test: blockdev writev readv size > 128k ...passed 00:27:21.047 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:21.047 Test: blockdev comparev and writev ...passed 00:27:21.047 Test: blockdev nvme passthru rw ...passed 00:27:21.047 Test: blockdev nvme passthru vendor specific ...passed 00:27:21.047 Test: blockdev nvme admin passthru ...passed 00:27:21.047 Test: blockdev copy ...passed 00:27:21.047 Suite: bdevio tests on: Malloc1p1 00:27:21.047 Test: blockdev write read block ...passed 00:27:21.047 Test: blockdev write zeroes read block ...passed 00:27:21.047 Test: blockdev write zeroes read no split ...passed 00:27:21.047 Test: blockdev write zeroes read split ...passed 00:27:21.047 Test: blockdev write zeroes read split partial ...passed 00:27:21.047 Test: blockdev reset ...passed 00:27:21.047 Test: blockdev write read 8 blocks ...passed 00:27:21.047 Test: blockdev write read size > 128k ...passed 00:27:21.047 Test: blockdev write read invalid size ...passed 00:27:21.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:21.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:21.047 Test: blockdev write read max offset ...passed 00:27:21.047 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:21.047 Test: blockdev writev readv 8 blocks ...passed 00:27:21.047 Test: blockdev writev readv 30 x 1block ...passed 00:27:21.047 Test: blockdev writev readv block ...passed 00:27:21.047 Test: blockdev writev readv size > 128k ...passed 00:27:21.047 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:21.047 Test: blockdev comparev and writev ...passed 00:27:21.047 Test: blockdev nvme passthru rw ...passed 00:27:21.047 Test: blockdev nvme passthru vendor specific ...passed 00:27:21.047 Test: blockdev nvme admin passthru ...passed 00:27:21.047 Test: blockdev copy ...passed 00:27:21.047 Suite: bdevio tests on: Malloc1p0 00:27:21.047 Test: blockdev write read block ...passed 00:27:21.047 Test: blockdev 
write zeroes read block ...passed 00:27:21.047 Test: blockdev write zeroes read no split ...passed 00:27:21.047 Test: blockdev write zeroes read split ...passed 00:27:21.307 Test: blockdev write zeroes read split partial ...passed 00:27:21.307 Test: blockdev reset ...passed 00:27:21.307 Test: blockdev write read 8 blocks ...passed 00:27:21.307 Test: blockdev write read size > 128k ...passed 00:27:21.307 Test: blockdev write read invalid size ...passed 00:27:21.307 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:21.307 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:21.307 Test: blockdev write read max offset ...passed 00:27:21.307 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:21.307 Test: blockdev writev readv 8 blocks ...passed 00:27:21.307 Test: blockdev writev readv 30 x 1block ...passed 00:27:21.307 Test: blockdev writev readv block ...passed 00:27:21.307 Test: blockdev writev readv size > 128k ...passed 00:27:21.307 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:21.307 Test: blockdev comparev and writev ...passed 00:27:21.307 Test: blockdev nvme passthru rw ...passed 00:27:21.307 Test: blockdev nvme passthru vendor specific ...passed 00:27:21.307 Test: blockdev nvme admin passthru ...passed 00:27:21.307 Test: blockdev copy ...passed 00:27:21.307 Suite: bdevio tests on: Malloc0 00:27:21.307 Test: blockdev write read block ...passed 00:27:21.307 Test: blockdev write zeroes read block ...passed 00:27:21.307 Test: blockdev write zeroes read no split ...passed 00:27:21.307 Test: blockdev write zeroes read split ...passed 00:27:21.307 Test: blockdev write zeroes read split partial ...passed 00:27:21.307 Test: blockdev reset ...passed 00:27:21.307 Test: blockdev write read 8 blocks ...passed 00:27:21.307 Test: blockdev write read size > 128k ...passed 00:27:21.307 Test: blockdev write read invalid size ...passed 00:27:21.307 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:21.307 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:21.307 Test: blockdev write read max offset ...passed 00:27:21.307 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:21.307 Test: blockdev writev readv 8 blocks ...passed 00:27:21.307 Test: blockdev writev readv 30 x 1block ...passed 00:27:21.307 Test: blockdev writev readv block ...passed 00:27:21.307 Test: blockdev writev readv size > 128k ...passed 00:27:21.307 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:21.307 Test: blockdev comparev and writev ...passed 00:27:21.307 Test: blockdev nvme passthru rw ...passed 00:27:21.307 Test: blockdev nvme passthru vendor specific ...passed 00:27:21.307 Test: blockdev nvme admin passthru ...passed 00:27:21.307 Test: blockdev copy ...passed 00:27:21.307 00:27:21.307 Run Summary: Type Total Ran Passed Failed Inactive 00:27:21.307 suites 16 16 n/a 0 0 00:27:21.307 tests 368 368 368 0 0 00:27:21.307 asserts 2224 2224 2224 0 n/a 00:27:21.307 00:27:21.307 Elapsed time = 4.298 seconds 00:27:21.307 0 00:27:21.307 19:20:37 -- bdev/blockdev.sh@295 -- # killprocess 116887 00:27:21.307 19:20:37 -- common/autotest_common.sh@936 -- # '[' -z 116887 ']' 00:27:21.307 19:20:37 -- common/autotest_common.sh@940 -- # kill -0 116887 00:27:21.307 19:20:37 -- common/autotest_common.sh@941 -- # uname 00:27:21.307 19:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:21.307 19:20:37 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116887 00:27:21.307 killing process with pid 116887 00:27:21.307 19:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:21.307 19:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:21.307 19:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116887' 00:27:21.307 19:20:37 -- common/autotest_common.sh@955 -- # kill 116887 00:27:21.307 19:20:37 -- common/autotest_common.sh@960 -- # wait 116887 00:27:23.839 ************************************ 00:27:23.839 END TEST bdev_bounds 00:27:23.839 ************************************ 00:27:23.839 19:20:39 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:27:23.839 00:27:23.839 real 0m5.889s 00:27:23.839 user 0m15.333s 00:27:23.839 sys 0m0.545s 00:27:23.839 19:20:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:23.839 19:20:39 -- common/autotest_common.sh@10 -- # set +x 00:27:23.839 19:20:39 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:27:23.839 19:20:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:23.839 19:20:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:23.839 19:20:39 -- common/autotest_common.sh@10 -- # set +x 00:27:23.839 ************************************ 00:27:23.839 START TEST bdev_nbd 00:27:23.839 ************************************ 00:27:23.839 19:20:39 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:27:23.839 19:20:39 -- bdev/blockdev.sh@300 -- # uname -s 00:27:23.839 19:20:39 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:27:23.839 19:20:39 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:23.839 19:20:39 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:23.839 19:20:39 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:27:23.839 19:20:39 -- bdev/blockdev.sh@304 -- # local bdev_all 00:27:23.839 19:20:39 -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:27:23.839 19:20:39 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:27:23.839 19:20:39 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:27:23.839 19:20:39 -- bdev/blockdev.sh@311 -- # local nbd_all 00:27:23.839 19:20:39 -- bdev/blockdev.sh@312 -- # bdev_num=16 00:27:23.839 19:20:39 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:27:23.839 19:20:39 -- bdev/blockdev.sh@314 -- # local nbd_list 00:27:23.839 19:20:39 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:27:23.839 19:20:39 -- bdev/blockdev.sh@315 -- # local bdev_list 00:27:23.839 19:20:39 -- bdev/blockdev.sh@318 -- # nbd_pid=117021 00:27:23.839 19:20:39 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:23.839 19:20:39 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:23.839 19:20:39 -- bdev/blockdev.sh@320 -- # waitforlisten 117021 /var/tmp/spdk-nbd.sock 00:27:23.839 19:20:39 -- common/autotest_common.sh@817 -- # '[' -z 117021 ']' 
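The waitforlisten step entered above blocks until the bdev_svc target started for this test is actually accepting RPCs on /var/tmp/spdk-nbd.sock. A minimal sketch of that polling pattern follows; the helper name, the use of rpc_get_methods as the probe, and the retry interval are illustrative rather than the exact autotest_common.sh implementation:

# Poll an SPDK app's UNIX-domain RPC socket until it answers.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < retries; i++)); do
        # rpc_get_methods only succeeds once the target is listening on $sock
        if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

wait_for_rpc_sock /var/tmp/spdk-nbd.sock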
00:27:23.839 19:20:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:23.839 19:20:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:23.839 19:20:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:23.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:23.839 19:20:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:23.839 19:20:39 -- common/autotest_common.sh@10 -- # set +x 00:27:24.098 [2024-04-18 19:20:39.852140] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:27:24.098 [2024-04-18 19:20:39.852620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.098 [2024-04-18 19:20:40.012118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.357 [2024-04-18 19:20:40.213752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.924 [2024-04-18 19:20:40.595477] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:24.924 [2024-04-18 19:20:40.595777] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:24.924 [2024-04-18 19:20:40.603431] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:24.924 [2024-04-18 19:20:40.603634] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:24.924 [2024-04-18 19:20:40.611448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:24.924 [2024-04-18 19:20:40.611610] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:24.924 [2024-04-18 19:20:40.611782] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:24.924 [2024-04-18 19:20:40.810376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:24.924 [2024-04-18 19:20:40.810726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.924 [2024-04-18 19:20:40.810895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:24.924 [2024-04-18 19:20:40.810998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.924 [2024-04-18 19:20:40.813552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.924 [2024-04-18 19:20:40.813724] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:27:25.491 19:20:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:25.491 19:20:41 -- common/autotest_common.sh@850 -- # return 0 00:27:25.491 19:20:41 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 
Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@24 -- # local i 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:25.491 19:20:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:27:25.749 19:20:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:25.749 19:20:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:25.749 19:20:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:25.749 19:20:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:25.749 19:20:41 -- common/autotest_common.sh@855 -- # local i 00:27:25.749 19:20:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:25.749 19:20:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:25.749 19:20:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:25.749 19:20:41 -- common/autotest_common.sh@859 -- # break 00:27:25.749 19:20:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:25.749 19:20:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:25.749 19:20:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:25.749 1+0 records in 00:27:25.749 1+0 records out 00:27:25.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516324 s, 7.9 MB/s 00:27:25.749 19:20:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:25.749 19:20:41 -- common/autotest_common.sh@872 -- # size=4096 00:27:25.749 19:20:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:25.749 19:20:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:25.749 19:20:41 -- common/autotest_common.sh@875 -- # return 0 00:27:25.749 19:20:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:25.749 19:20:41 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:25.749 19:20:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:27:26.006 19:20:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:26.006 19:20:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:26.006 19:20:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:26.006 19:20:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:26.006 19:20:41 -- common/autotest_common.sh@855 -- # local i 00:27:26.006 19:20:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:26.007 19:20:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:26.007 19:20:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:26.007 19:20:41 -- common/autotest_common.sh@859 -- # break 00:27:26.007 19:20:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:26.007 19:20:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:26.007 19:20:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:27:26.007 1+0 records in 00:27:26.007 1+0 records out 00:27:26.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995619 s, 4.1 MB/s 00:27:26.007 19:20:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.007 19:20:41 -- common/autotest_common.sh@872 -- # size=4096 00:27:26.007 19:20:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.007 19:20:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:26.007 19:20:41 -- common/autotest_common.sh@875 -- # return 0 00:27:26.007 19:20:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:26.007 19:20:41 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:26.007 19:20:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:27:26.265 19:20:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:27:26.265 19:20:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:27:26.265 19:20:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:27:26.265 19:20:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:27:26.265 19:20:42 -- common/autotest_common.sh@855 -- # local i 00:27:26.265 19:20:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:26.265 19:20:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:26.265 19:20:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:27:26.265 19:20:42 -- common/autotest_common.sh@859 -- # break 00:27:26.265 19:20:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:26.265 19:20:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:26.265 19:20:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.265 1+0 records in 00:27:26.265 1+0 records out 00:27:26.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498623 s, 8.2 MB/s 00:27:26.265 19:20:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.265 19:20:42 -- common/autotest_common.sh@872 -- # size=4096 00:27:26.265 19:20:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.265 19:20:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:26.265 19:20:42 -- common/autotest_common.sh@875 -- # return 0 00:27:26.265 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:26.265 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:26.265 19:20:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:27:26.523 19:20:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:27:26.523 19:20:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:27:26.523 19:20:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:27:26.523 19:20:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:27:26.523 19:20:42 -- common/autotest_common.sh@855 -- # local i 00:27:26.523 19:20:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:26.523 19:20:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:26.523 19:20:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:27:26.523 19:20:42 -- common/autotest_common.sh@859 -- # break 00:27:26.523 19:20:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:26.523 19:20:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:26.523 19:20:42 -- common/autotest_common.sh@871 -- # dd 
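Each export in this pass is a single nbd_start_disk RPC, as in the call just issued for Malloc1p1. Called with only a bdev name, the target picks the next free /dev/nbdN and prints the chosen path; a device path can also be passed explicitly, which the data-verify pass later in this run does. A small usage sketch, with bdev names taken from this test and the RPC array used only as shorthand:

RPC=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)

# let the target pick the next free node; the chosen path is printed, e.g. /dev/nbd2
nbd_dev=$("${RPC[@]}" nbd_start_disk Malloc1p1)
echo "Malloc1p1 exported as $nbd_dev"

# or pin a bdev to a specific node explicitly
"${RPC[@]}" nbd_start_disk Malloc0 /dev/nbd0

# detach an export again when done
"${RPC[@]}" nbd_stop_disk "$nbd_dev"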
if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.523 1+0 records in 00:27:26.523 1+0 records out 00:27:26.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433958 s, 9.4 MB/s 00:27:26.523 19:20:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.523 19:20:42 -- common/autotest_common.sh@872 -- # size=4096 00:27:26.523 19:20:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.523 19:20:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:26.523 19:20:42 -- common/autotest_common.sh@875 -- # return 0 00:27:26.523 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:26.523 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:26.523 19:20:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:27:26.782 19:20:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:27:26.782 19:20:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:27:26.782 19:20:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:27:26.782 19:20:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:27:26.782 19:20:42 -- common/autotest_common.sh@855 -- # local i 00:27:26.782 19:20:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:26.782 19:20:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:26.782 19:20:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:27:26.782 19:20:42 -- common/autotest_common.sh@859 -- # break 00:27:26.782 19:20:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:26.782 19:20:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:26.782 19:20:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.782 1+0 records in 00:27:26.782 1+0 records out 00:27:26.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843191 s, 4.9 MB/s 00:27:26.782 19:20:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.782 19:20:42 -- common/autotest_common.sh@872 -- # size=4096 00:27:26.782 19:20:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.782 19:20:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:26.782 19:20:42 -- common/autotest_common.sh@875 -- # return 0 00:27:26.782 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:26.782 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:26.782 19:20:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:27:27.040 19:20:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:27:27.040 19:20:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:27:27.040 19:20:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:27:27.040 19:20:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:27:27.040 19:20:42 -- common/autotest_common.sh@855 -- # local i 00:27:27.040 19:20:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:27.040 19:20:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:27.040 19:20:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:27:27.040 19:20:42 -- common/autotest_common.sh@859 -- # break 00:27:27.040 19:20:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:27.040 19:20:42 -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:27.040 19:20:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.040 1+0 records in 00:27:27.040 1+0 records out 00:27:27.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497637 s, 8.2 MB/s 00:27:27.040 19:20:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.040 19:20:42 -- common/autotest_common.sh@872 -- # size=4096 00:27:27.040 19:20:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.040 19:20:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:27.040 19:20:42 -- common/autotest_common.sh@875 -- # return 0 00:27:27.040 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:27.040 19:20:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:27.040 19:20:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:27:27.608 19:20:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:27:27.608 19:20:43 -- common/autotest_common.sh@855 -- # local i 00:27:27.608 19:20:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:27:27.608 19:20:43 -- common/autotest_common.sh@859 -- # break 00:27:27.608 19:20:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.608 1+0 records in 00:27:27.608 1+0 records out 00:27:27.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000970171 s, 4.2 MB/s 00:27:27.608 19:20:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.608 19:20:43 -- common/autotest_common.sh@872 -- # size=4096 00:27:27.608 19:20:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.608 19:20:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:27.608 19:20:43 -- common/autotest_common.sh@875 -- # return 0 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:27:27.608 19:20:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:27:27.608 19:20:43 -- common/autotest_common.sh@855 -- # local i 00:27:27.608 19:20:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:27:27.608 19:20:43 -- common/autotest_common.sh@859 -- # break 
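After each export the script runs the readiness check being traced here: poll /proc/partitions until the new node appears, then read one 4 KiB block through it with O_DIRECT and confirm the copy is non-empty. Condensed into a standalone sketch; the function name, scratch-file path, and sleep interval are illustrative:

# Wait for /dev/$1 to appear, then prove a direct read goes through it.
check_nbd_ready() {
    local nbd=$1 out=/tmp/nbdtest
    # wait (up to ~2 s) for the kernel to register the new node
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    # read one 4096-byte block with O_DIRECT
    dd if="/dev/$nbd" of="$out" bs=4096 count=1 iflag=direct
    # an empty output file would mean the read silently returned nothing
    [[ $(stat -c %s "$out") -ne 0 ]] || return 1
    rm -f "$out"
}

check_nbd_ready nbd7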
00:27:27.608 19:20:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:27.608 19:20:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.608 1+0 records in 00:27:27.608 1+0 records out 00:27:27.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767812 s, 5.3 MB/s 00:27:27.608 19:20:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.608 19:20:43 -- common/autotest_common.sh@872 -- # size=4096 00:27:27.608 19:20:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.608 19:20:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:27.608 19:20:43 -- common/autotest_common.sh@875 -- # return 0 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:27.608 19:20:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:27:27.867 19:20:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:27:27.867 19:20:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:27:27.867 19:20:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:27:27.867 19:20:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:27:27.867 19:20:43 -- common/autotest_common.sh@855 -- # local i 00:27:27.867 19:20:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:27.867 19:20:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:27.867 19:20:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:27:27.867 19:20:43 -- common/autotest_common.sh@859 -- # break 00:27:27.867 19:20:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:27.867 19:20:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:27.867 19:20:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.867 1+0 records in 00:27:27.867 1+0 records out 00:27:27.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666182 s, 6.1 MB/s 00:27:27.867 19:20:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.867 19:20:43 -- common/autotest_common.sh@872 -- # size=4096 00:27:27.867 19:20:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.127 19:20:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:28.127 19:20:43 -- common/autotest_common.sh@875 -- # return 0 00:27:28.127 19:20:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:28.127 19:20:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:28.127 19:20:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:27:28.127 19:20:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:27:28.127 19:20:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:27:28.127 19:20:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:27:28.127 19:20:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:27:28.127 19:20:44 -- common/autotest_common.sh@855 -- # local i 00:27:28.127 19:20:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:28.127 19:20:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:28.127 19:20:44 -- common/autotest_common.sh@858 -- # grep -q -w 
nbd9 /proc/partitions 00:27:28.127 19:20:44 -- common/autotest_common.sh@859 -- # break 00:27:28.127 19:20:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:28.127 19:20:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:28.127 19:20:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.127 1+0 records in 00:27:28.127 1+0 records out 00:27:28.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00283078 s, 1.4 MB/s 00:27:28.127 19:20:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.127 19:20:44 -- common/autotest_common.sh@872 -- # size=4096 00:27:28.127 19:20:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.127 19:20:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:28.127 19:20:44 -- common/autotest_common.sh@875 -- # return 0 00:27:28.127 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:28.127 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:28.127 19:20:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:27:28.433 19:20:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:27:28.433 19:20:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:27:28.433 19:20:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:27:28.433 19:20:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:27:28.433 19:20:44 -- common/autotest_common.sh@855 -- # local i 00:27:28.433 19:20:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:28.433 19:20:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:28.433 19:20:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:27:28.433 19:20:44 -- common/autotest_common.sh@859 -- # break 00:27:28.433 19:20:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:28.433 19:20:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:28.433 19:20:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.433 1+0 records in 00:27:28.433 1+0 records out 00:27:28.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140508 s, 2.9 MB/s 00:27:28.433 19:20:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.433 19:20:44 -- common/autotest_common.sh@872 -- # size=4096 00:27:28.433 19:20:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.433 19:20:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:28.433 19:20:44 -- common/autotest_common.sh@875 -- # return 0 00:27:28.433 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:28.433 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:28.433 19:20:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:27:28.692 19:20:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:27:28.692 19:20:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:27:28.692 19:20:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:27:28.692 19:20:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:27:28.692 19:20:44 -- common/autotest_common.sh@855 -- # local i 00:27:28.692 19:20:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:28.692 19:20:44 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:28.692 19:20:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:27:28.692 19:20:44 -- common/autotest_common.sh@859 -- # break 00:27:28.692 19:20:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:28.692 19:20:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:28.692 19:20:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.692 1+0 records in 00:27:28.692 1+0 records out 00:27:28.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620413 s, 6.6 MB/s 00:27:28.692 19:20:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.692 19:20:44 -- common/autotest_common.sh@872 -- # size=4096 00:27:28.692 19:20:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.692 19:20:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:28.692 19:20:44 -- common/autotest_common.sh@875 -- # return 0 00:27:28.692 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:28.692 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:28.692 19:20:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:27:28.949 19:20:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:27:28.949 19:20:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:27:28.949 19:20:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:27:28.949 19:20:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:27:28.949 19:20:44 -- common/autotest_common.sh@855 -- # local i 00:27:28.949 19:20:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:28.949 19:20:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:28.949 19:20:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:27:28.949 19:20:44 -- common/autotest_common.sh@859 -- # break 00:27:28.949 19:20:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:28.949 19:20:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:28.949 19:20:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.949 1+0 records in 00:27:28.949 1+0 records out 00:27:28.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073787 s, 5.6 MB/s 00:27:28.949 19:20:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.949 19:20:44 -- common/autotest_common.sh@872 -- # size=4096 00:27:28.949 19:20:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.949 19:20:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:28.949 19:20:44 -- common/autotest_common.sh@875 -- # return 0 00:27:28.949 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:28.949 19:20:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:28.949 19:20:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:27:29.519 19:20:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:27:29.519 19:20:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:27:29.519 19:20:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:27:29.519 19:20:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:27:29.519 19:20:45 -- common/autotest_common.sh@855 -- # local i 
00:27:29.519 19:20:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:29.519 19:20:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:29.519 19:20:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:27:29.519 19:20:45 -- common/autotest_common.sh@859 -- # break 00:27:29.519 19:20:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:29.519 19:20:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:29.519 19:20:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:29.519 1+0 records in 00:27:29.519 1+0 records out 00:27:29.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110424 s, 3.7 MB/s 00:27:29.519 19:20:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.519 19:20:45 -- common/autotest_common.sh@872 -- # size=4096 00:27:29.519 19:20:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.519 19:20:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:29.519 19:20:45 -- common/autotest_common.sh@875 -- # return 0 00:27:29.519 19:20:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:29.519 19:20:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:29.519 19:20:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:27:29.776 19:20:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:27:29.776 19:20:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:27:29.776 19:20:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:27:29.776 19:20:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:27:29.777 19:20:45 -- common/autotest_common.sh@855 -- # local i 00:27:29.777 19:20:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:29.777 19:20:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:29.777 19:20:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:27:29.777 19:20:45 -- common/autotest_common.sh@859 -- # break 00:27:29.777 19:20:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:29.777 19:20:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:29.777 19:20:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:29.777 1+0 records in 00:27:29.777 1+0 records out 00:27:29.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071594 s, 5.7 MB/s 00:27:29.777 19:20:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.777 19:20:45 -- common/autotest_common.sh@872 -- # size=4096 00:27:29.777 19:20:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.777 19:20:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:29.777 19:20:45 -- common/autotest_common.sh@875 -- # return 0 00:27:29.777 19:20:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:29.777 19:20:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:29.777 19:20:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:27:30.035 19:20:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:27:30.035 19:20:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:27:30.035 19:20:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:27:30.035 19:20:45 -- common/autotest_common.sh@854 -- # 
local nbd_name=nbd15 00:27:30.035 19:20:45 -- common/autotest_common.sh@855 -- # local i 00:27:30.035 19:20:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:30.035 19:20:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:30.035 19:20:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:27:30.035 19:20:45 -- common/autotest_common.sh@859 -- # break 00:27:30.035 19:20:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:30.035 19:20:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:30.035 19:20:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:30.035 1+0 records in 00:27:30.035 1+0 records out 00:27:30.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101832 s, 4.0 MB/s 00:27:30.035 19:20:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.035 19:20:45 -- common/autotest_common.sh@872 -- # size=4096 00:27:30.035 19:20:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.035 19:20:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:30.035 19:20:45 -- common/autotest_common.sh@875 -- # return 0 00:27:30.035 19:20:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:30.035 19:20:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:27:30.035 19:20:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:30.294 19:20:46 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd0", 00:27:30.294 "bdev_name": "Malloc0" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd1", 00:27:30.294 "bdev_name": "Malloc1p0" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd2", 00:27:30.294 "bdev_name": "Malloc1p1" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd3", 00:27:30.294 "bdev_name": "Malloc2p0" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd4", 00:27:30.294 "bdev_name": "Malloc2p1" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd5", 00:27:30.294 "bdev_name": "Malloc2p2" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd6", 00:27:30.294 "bdev_name": "Malloc2p3" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd7", 00:27:30.294 "bdev_name": "Malloc2p4" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd8", 00:27:30.294 "bdev_name": "Malloc2p5" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd9", 00:27:30.294 "bdev_name": "Malloc2p6" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd10", 00:27:30.294 "bdev_name": "Malloc2p7" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd11", 00:27:30.294 "bdev_name": "TestPT" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd12", 00:27:30.294 "bdev_name": "raid0" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd13", 00:27:30.294 "bdev_name": "concat0" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd14", 00:27:30.294 "bdev_name": "raid1" 00:27:30.294 }, 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd15", 00:27:30.294 "bdev_name": "AIO0" 00:27:30.294 } 00:27:30.294 ]' 00:27:30.294 19:20:46 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:30.294 19:20:46 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | 
.nbd_device' 00:27:30.294 19:20:46 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:30.294 { 00:27:30.294 "nbd_device": "/dev/nbd0", 00:27:30.294 "bdev_name": "Malloc0" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd1", 00:27:30.295 "bdev_name": "Malloc1p0" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd2", 00:27:30.295 "bdev_name": "Malloc1p1" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd3", 00:27:30.295 "bdev_name": "Malloc2p0" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd4", 00:27:30.295 "bdev_name": "Malloc2p1" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd5", 00:27:30.295 "bdev_name": "Malloc2p2" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd6", 00:27:30.295 "bdev_name": "Malloc2p3" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd7", 00:27:30.295 "bdev_name": "Malloc2p4" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd8", 00:27:30.295 "bdev_name": "Malloc2p5" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd9", 00:27:30.295 "bdev_name": "Malloc2p6" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd10", 00:27:30.295 "bdev_name": "Malloc2p7" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd11", 00:27:30.295 "bdev_name": "TestPT" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd12", 00:27:30.295 "bdev_name": "raid0" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd13", 00:27:30.295 "bdev_name": "concat0" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd14", 00:27:30.295 "bdev_name": "raid1" 00:27:30.295 }, 00:27:30.295 { 00:27:30.295 "nbd_device": "/dev/nbd15", 00:27:30.295 "bdev_name": "AIO0" 00:27:30.295 } 00:27:30.295 ]' 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@51 -- # local i 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@41 -- # break 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@45 -- # return 0 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:30.553 19:20:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@35 -- 
# local nbd_name=nbd1 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@41 -- # break 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:31.119 19:20:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@41 -- # break 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:31.119 19:20:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@41 -- # break 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:31.376 19:20:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@41 -- # break 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:31.634 19:20:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@41 -- # break 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.892 19:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:31.892 19:20:47 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@41 -- # break 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@41 -- # break 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.458 19:20:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@41 -- # break 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.716 19:20:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@41 -- # break 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.974 19:20:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@41 -- # break 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@45 -- # return 0 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:33.541 19:20:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@41 -- # break 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@45 -- # return 0 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:33.802 19:20:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@41 -- # break 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:34.061 19:20:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:27:34.318 19:20:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:27:34.318 19:20:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:27:34.318 19:20:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:27:34.318 19:20:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.318 19:20:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.319 19:20:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:27:34.319 19:20:50 -- bdev/nbd_common.sh@41 -- # break 00:27:34.319 19:20:50 -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.319 19:20:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:34.319 19:20:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@41 -- # break 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:34.577 19:20:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd15 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@41 -- # break 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:34.836 19:20:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@65 -- # true 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@65 -- # count=0 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@122 -- # count=0 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@127 -- # return 0 00:27:35.095 19:20:50 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@12 -- # local i 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:35.095 19:20:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:35.353 /dev/nbd0 
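The nbd_stop_disk / nbd_get_disks sequence traced a few entries above is how the start-stop pass proves every export was torn down before the data-verify pass (now underway) re-exports the bdevs on fixed nodes: stop each device, wait for it to vanish from /proc/partitions, then count the nbd_device entries left in the JSON listing. A compact sketch of that verification, with the helper name and loop bounds as illustrative stand-ins for the nbd_common.sh code:

RPC=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)

# stop one export and wait for its kernel node to disappear
stop_and_wait() {
    local dev=$1
    "${RPC[@]}" nbd_stop_disk "$dev"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$dev")" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}

for dev in /dev/nbd{0..15}; do stop_and_wait "$dev"; done

# after a full teardown the listing must report zero nbd devices
count=$("${RPC[@]}" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[[ ${count:-0} -eq 0 ]]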
00:27:35.353 19:20:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:35.353 19:20:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:35.353 19:20:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:35.353 19:20:51 -- common/autotest_common.sh@855 -- # local i 00:27:35.353 19:20:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:35.353 19:20:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:35.353 19:20:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:35.353 19:20:51 -- common/autotest_common.sh@859 -- # break 00:27:35.353 19:20:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:35.353 19:20:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:35.353 19:20:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.353 1+0 records in 00:27:35.353 1+0 records out 00:27:35.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363287 s, 11.3 MB/s 00:27:35.353 19:20:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.353 19:20:51 -- common/autotest_common.sh@872 -- # size=4096 00:27:35.353 19:20:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.353 19:20:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:35.353 19:20:51 -- common/autotest_common.sh@875 -- # return 0 00:27:35.353 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:35.353 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:35.353 19:20:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:27:35.611 /dev/nbd1 00:27:35.611 19:20:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:35.611 19:20:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:35.611 19:20:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:35.611 19:20:51 -- common/autotest_common.sh@855 -- # local i 00:27:35.611 19:20:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:35.611 19:20:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:35.611 19:20:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:35.611 19:20:51 -- common/autotest_common.sh@859 -- # break 00:27:35.611 19:20:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:35.611 19:20:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:35.611 19:20:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.611 1+0 records in 00:27:35.611 1+0 records out 00:27:35.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257666 s, 15.9 MB/s 00:27:35.611 19:20:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.611 19:20:51 -- common/autotest_common.sh@872 -- # size=4096 00:27:35.611 19:20:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.611 19:20:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:35.611 19:20:51 -- common/autotest_common.sh@875 -- # return 0 00:27:35.611 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:35.611 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:35.612 19:20:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:27:35.870 
/dev/nbd10 00:27:35.870 19:20:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:35.870 19:20:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:35.870 19:20:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:27:35.870 19:20:51 -- common/autotest_common.sh@855 -- # local i 00:27:35.870 19:20:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:35.870 19:20:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:35.870 19:20:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:27:35.870 19:20:51 -- common/autotest_common.sh@859 -- # break 00:27:35.870 19:20:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:35.870 19:20:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:35.870 19:20:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.870 1+0 records in 00:27:35.870 1+0 records out 00:27:35.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126048 s, 3.2 MB/s 00:27:35.870 19:20:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.870 19:20:51 -- common/autotest_common.sh@872 -- # size=4096 00:27:35.870 19:20:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.870 19:20:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:35.870 19:20:51 -- common/autotest_common.sh@875 -- # return 0 00:27:35.870 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:35.870 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:35.870 19:20:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:27:36.128 /dev/nbd11 00:27:36.128 19:20:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:36.128 19:20:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:36.128 19:20:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:27:36.128 19:20:51 -- common/autotest_common.sh@855 -- # local i 00:27:36.128 19:20:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:36.128 19:20:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:36.128 19:20:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:27:36.128 19:20:51 -- common/autotest_common.sh@859 -- # break 00:27:36.128 19:20:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:36.128 19:20:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:36.128 19:20:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.128 1+0 records in 00:27:36.128 1+0 records out 00:27:36.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045126 s, 9.1 MB/s 00:27:36.128 19:20:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.128 19:20:51 -- common/autotest_common.sh@872 -- # size=4096 00:27:36.128 19:20:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.128 19:20:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:36.128 19:20:51 -- common/autotest_common.sh@875 -- # return 0 00:27:36.128 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.128 19:20:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:36.128 19:20:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 
/dev/nbd12 00:27:36.385 /dev/nbd12 00:27:36.385 19:20:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:27:36.385 19:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:27:36.385 19:20:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:27:36.385 19:20:52 -- common/autotest_common.sh@855 -- # local i 00:27:36.385 19:20:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:36.385 19:20:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:36.385 19:20:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:27:36.385 19:20:52 -- common/autotest_common.sh@859 -- # break 00:27:36.385 19:20:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:36.385 19:20:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:36.385 19:20:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.385 1+0 records in 00:27:36.385 1+0 records out 00:27:36.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420144 s, 9.7 MB/s 00:27:36.385 19:20:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.385 19:20:52 -- common/autotest_common.sh@872 -- # size=4096 00:27:36.385 19:20:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.385 19:20:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:36.385 19:20:52 -- common/autotest_common.sh@875 -- # return 0 00:27:36.385 19:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.385 19:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:36.385 19:20:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:27:36.643 /dev/nbd13 00:27:36.643 19:20:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:27:36.643 19:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:27:36.643 19:20:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:27:36.644 19:20:52 -- common/autotest_common.sh@855 -- # local i 00:27:36.644 19:20:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:36.644 19:20:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:36.644 19:20:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:27:36.644 19:20:52 -- common/autotest_common.sh@859 -- # break 00:27:36.644 19:20:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:36.644 19:20:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:36.644 19:20:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.644 1+0 records in 00:27:36.644 1+0 records out 00:27:36.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041651 s, 9.8 MB/s 00:27:36.644 19:20:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.644 19:20:52 -- common/autotest_common.sh@872 -- # size=4096 00:27:36.644 19:20:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.644 19:20:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:36.644 19:20:52 -- common/autotest_common.sh@875 -- # return 0 00:27:36.644 19:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.644 19:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:36.644 19:20:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Malloc2p3 /dev/nbd14 00:27:36.902 /dev/nbd14 00:27:36.902 19:20:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:27:36.902 19:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:27:36.902 19:20:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:27:36.902 19:20:52 -- common/autotest_common.sh@855 -- # local i 00:27:36.902 19:20:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:36.902 19:20:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:36.902 19:20:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:27:37.160 19:20:52 -- common/autotest_common.sh@859 -- # break 00:27:37.160 19:20:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:37.160 19:20:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:37.160 19:20:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.160 1+0 records in 00:27:37.160 1+0 records out 00:27:37.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499584 s, 8.2 MB/s 00:27:37.160 19:20:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.160 19:20:52 -- common/autotest_common.sh@872 -- # size=4096 00:27:37.160 19:20:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.160 19:20:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:37.160 19:20:52 -- common/autotest_common.sh@875 -- # return 0 00:27:37.160 19:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.160 19:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:37.160 19:20:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:27:37.160 /dev/nbd15 00:27:37.160 19:20:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:27:37.160 19:20:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:27:37.160 19:20:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:27:37.160 19:20:53 -- common/autotest_common.sh@855 -- # local i 00:27:37.160 19:20:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:37.160 19:20:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:37.160 19:20:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:27:37.160 19:20:53 -- common/autotest_common.sh@859 -- # break 00:27:37.160 19:20:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:37.160 19:20:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:37.160 19:20:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.160 1+0 records in 00:27:37.160 1+0 records out 00:27:37.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591185 s, 6.9 MB/s 00:27:37.160 19:20:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.160 19:20:53 -- common/autotest_common.sh@872 -- # size=4096 00:27:37.160 19:20:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.160 19:20:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:37.160 19:20:53 -- common/autotest_common.sh@875 -- # return 0 00:27:37.160 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.160 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:37.160 19:20:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:27:37.423 /dev/nbd2 00:27:37.691 19:20:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:27:37.691 19:20:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:27:37.691 19:20:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:27:37.691 19:20:53 -- common/autotest_common.sh@855 -- # local i 00:27:37.691 19:20:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:37.691 19:20:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:37.691 19:20:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:27:37.691 19:20:53 -- common/autotest_common.sh@859 -- # break 00:27:37.691 19:20:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:37.691 19:20:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:37.691 19:20:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.691 1+0 records in 00:27:37.691 1+0 records out 00:27:37.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439081 s, 9.3 MB/s 00:27:37.691 19:20:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.691 19:20:53 -- common/autotest_common.sh@872 -- # size=4096 00:27:37.691 19:20:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.691 19:20:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:37.691 19:20:53 -- common/autotest_common.sh@875 -- # return 0 00:27:37.691 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.691 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:37.691 19:20:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:27:37.691 /dev/nbd3 00:27:37.948 19:20:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:27:37.948 19:20:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:27:37.948 19:20:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:27:37.948 19:20:53 -- common/autotest_common.sh@855 -- # local i 00:27:37.948 19:20:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:37.948 19:20:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:37.948 19:20:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:27:37.948 19:20:53 -- common/autotest_common.sh@859 -- # break 00:27:37.948 19:20:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:37.948 19:20:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:37.948 19:20:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.948 1+0 records in 00:27:37.948 1+0 records out 00:27:37.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623025 s, 6.6 MB/s 00:27:37.948 19:20:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.948 19:20:53 -- common/autotest_common.sh@872 -- # size=4096 00:27:37.948 19:20:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.948 19:20:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:37.948 19:20:53 -- common/autotest_common.sh@875 -- # return 0 00:27:37.948 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.948 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:37.948 19:20:53 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:27:37.948 /dev/nbd4 00:27:38.206 19:20:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:27:38.206 19:20:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:27:38.206 19:20:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:27:38.206 19:20:53 -- common/autotest_common.sh@855 -- # local i 00:27:38.206 19:20:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:38.206 19:20:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:38.206 19:20:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:27:38.206 19:20:53 -- common/autotest_common.sh@859 -- # break 00:27:38.206 19:20:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:38.206 19:20:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:38.206 19:20:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:38.206 1+0 records in 00:27:38.206 1+0 records out 00:27:38.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760998 s, 5.4 MB/s 00:27:38.206 19:20:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.206 19:20:53 -- common/autotest_common.sh@872 -- # size=4096 00:27:38.206 19:20:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.206 19:20:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:38.206 19:20:53 -- common/autotest_common.sh@875 -- # return 0 00:27:38.206 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:38.206 19:20:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:38.206 19:20:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:27:38.463 /dev/nbd5 00:27:38.464 19:20:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:27:38.464 19:20:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:27:38.464 19:20:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:27:38.464 19:20:54 -- common/autotest_common.sh@855 -- # local i 00:27:38.464 19:20:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:38.464 19:20:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:38.464 19:20:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:27:38.464 19:20:54 -- common/autotest_common.sh@859 -- # break 00:27:38.464 19:20:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:38.464 19:20:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:38.464 19:20:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:38.464 1+0 records in 00:27:38.464 1+0 records out 00:27:38.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108187 s, 3.8 MB/s 00:27:38.464 19:20:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.464 19:20:54 -- common/autotest_common.sh@872 -- # size=4096 00:27:38.464 19:20:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.464 19:20:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:38.464 19:20:54 -- common/autotest_common.sh@875 -- # return 0 00:27:38.464 19:20:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:38.464 19:20:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:38.464 19:20:54 -- bdev/nbd_common.sh@15 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:27:38.722 /dev/nbd6 00:27:38.722 19:20:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:27:38.722 19:20:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:27:38.722 19:20:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:27:38.722 19:20:54 -- common/autotest_common.sh@855 -- # local i 00:27:38.722 19:20:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:38.722 19:20:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:38.722 19:20:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:27:38.722 19:20:54 -- common/autotest_common.sh@859 -- # break 00:27:38.722 19:20:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:38.722 19:20:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:38.722 19:20:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:38.722 1+0 records in 00:27:38.722 1+0 records out 00:27:38.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124958 s, 3.3 MB/s 00:27:38.722 19:20:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.722 19:20:54 -- common/autotest_common.sh@872 -- # size=4096 00:27:38.722 19:20:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.722 19:20:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:38.722 19:20:54 -- common/autotest_common.sh@875 -- # return 0 00:27:38.722 19:20:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:38.722 19:20:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:38.722 19:20:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:27:38.980 /dev/nbd7 00:27:38.980 19:20:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:27:38.980 19:20:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:27:38.980 19:20:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:27:38.980 19:20:54 -- common/autotest_common.sh@855 -- # local i 00:27:38.980 19:20:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:38.980 19:20:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:38.980 19:20:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:27:38.980 19:20:54 -- common/autotest_common.sh@859 -- # break 00:27:38.980 19:20:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:38.980 19:20:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:38.980 19:20:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:38.980 1+0 records in 00:27:38.980 1+0 records out 00:27:38.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000962418 s, 4.3 MB/s 00:27:38.980 19:20:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.980 19:20:54 -- common/autotest_common.sh@872 -- # size=4096 00:27:38.980 19:20:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:38.980 19:20:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:38.980 19:20:54 -- common/autotest_common.sh@875 -- # return 0 00:27:38.980 19:20:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:38.980 19:20:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:38.980 19:20:54 -- bdev/nbd_common.sh@15 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:27:39.238 /dev/nbd8 00:27:39.238 19:20:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:27:39.238 19:20:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:27:39.238 19:20:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:27:39.238 19:20:55 -- common/autotest_common.sh@855 -- # local i 00:27:39.238 19:20:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:39.238 19:20:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:39.238 19:20:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:27:39.238 19:20:55 -- common/autotest_common.sh@859 -- # break 00:27:39.238 19:20:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:39.238 19:20:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:39.238 19:20:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:39.238 1+0 records in 00:27:39.238 1+0 records out 00:27:39.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107016 s, 3.8 MB/s 00:27:39.238 19:20:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.238 19:20:55 -- common/autotest_common.sh@872 -- # size=4096 00:27:39.238 19:20:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.238 19:20:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:39.238 19:20:55 -- common/autotest_common.sh@875 -- # return 0 00:27:39.238 19:20:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:39.238 19:20:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:39.238 19:20:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:27:39.496 /dev/nbd9 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:27:39.496 19:20:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:27:39.496 19:20:55 -- common/autotest_common.sh@855 -- # local i 00:27:39.496 19:20:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:39.496 19:20:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:39.496 19:20:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:27:39.496 19:20:55 -- common/autotest_common.sh@859 -- # break 00:27:39.496 19:20:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:39.496 19:20:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:39.496 19:20:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:39.496 1+0 records in 00:27:39.496 1+0 records out 00:27:39.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126438 s, 3.2 MB/s 00:27:39.496 19:20:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.496 19:20:55 -- common/autotest_common.sh@872 -- # size=4096 00:27:39.496 19:20:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.496 19:20:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:39.496 19:20:55 -- common/autotest_common.sh@875 -- # return 0 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@95 -- 
# nbd_get_count /var/tmp/spdk-nbd.sock 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:39.496 19:20:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:40.062 19:20:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd0", 00:27:40.062 "bdev_name": "Malloc0" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd1", 00:27:40.062 "bdev_name": "Malloc1p0" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd10", 00:27:40.062 "bdev_name": "Malloc1p1" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd11", 00:27:40.062 "bdev_name": "Malloc2p0" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd12", 00:27:40.062 "bdev_name": "Malloc2p1" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd13", 00:27:40.062 "bdev_name": "Malloc2p2" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd14", 00:27:40.062 "bdev_name": "Malloc2p3" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd15", 00:27:40.062 "bdev_name": "Malloc2p4" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd2", 00:27:40.062 "bdev_name": "Malloc2p5" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd3", 00:27:40.062 "bdev_name": "Malloc2p6" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd4", 00:27:40.062 "bdev_name": "Malloc2p7" 00:27:40.062 }, 00:27:40.062 { 00:27:40.062 "nbd_device": "/dev/nbd5", 00:27:40.063 "bdev_name": "TestPT" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd6", 00:27:40.063 "bdev_name": "raid0" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd7", 00:27:40.063 "bdev_name": "concat0" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd8", 00:27:40.063 "bdev_name": "raid1" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd9", 00:27:40.063 "bdev_name": "AIO0" 00:27:40.063 } 00:27:40.063 ]' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd0", 00:27:40.063 "bdev_name": "Malloc0" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd1", 00:27:40.063 "bdev_name": "Malloc1p0" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd10", 00:27:40.063 "bdev_name": "Malloc1p1" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd11", 00:27:40.063 "bdev_name": "Malloc2p0" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd12", 00:27:40.063 "bdev_name": "Malloc2p1" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd13", 00:27:40.063 "bdev_name": "Malloc2p2" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd14", 00:27:40.063 "bdev_name": "Malloc2p3" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd15", 00:27:40.063 "bdev_name": "Malloc2p4" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd2", 00:27:40.063 "bdev_name": "Malloc2p5" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd3", 00:27:40.063 "bdev_name": "Malloc2p6" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd4", 00:27:40.063 "bdev_name": "Malloc2p7" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd5", 00:27:40.063 "bdev_name": "TestPT" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd6", 00:27:40.063 "bdev_name": "raid0" 00:27:40.063 
}, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd7", 00:27:40.063 "bdev_name": "concat0" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd8", 00:27:40.063 "bdev_name": "raid1" 00:27:40.063 }, 00:27:40.063 { 00:27:40.063 "nbd_device": "/dev/nbd9", 00:27:40.063 "bdev_name": "AIO0" 00:27:40.063 } 00:27:40.063 ]' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:40.063 /dev/nbd1 00:27:40.063 /dev/nbd10 00:27:40.063 /dev/nbd11 00:27:40.063 /dev/nbd12 00:27:40.063 /dev/nbd13 00:27:40.063 /dev/nbd14 00:27:40.063 /dev/nbd15 00:27:40.063 /dev/nbd2 00:27:40.063 /dev/nbd3 00:27:40.063 /dev/nbd4 00:27:40.063 /dev/nbd5 00:27:40.063 /dev/nbd6 00:27:40.063 /dev/nbd7 00:27:40.063 /dev/nbd8 00:27:40.063 /dev/nbd9' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:40.063 /dev/nbd1 00:27:40.063 /dev/nbd10 00:27:40.063 /dev/nbd11 00:27:40.063 /dev/nbd12 00:27:40.063 /dev/nbd13 00:27:40.063 /dev/nbd14 00:27:40.063 /dev/nbd15 00:27:40.063 /dev/nbd2 00:27:40.063 /dev/nbd3 00:27:40.063 /dev/nbd4 00:27:40.063 /dev/nbd5 00:27:40.063 /dev/nbd6 00:27:40.063 /dev/nbd7 00:27:40.063 /dev/nbd8 00:27:40.063 /dev/nbd9' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@65 -- # count=16 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@66 -- # echo 16 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@95 -- # count=16 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:40.063 256+0 records in 00:27:40.063 256+0 records out 00:27:40.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532861 s, 197 MB/s 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:40.063 256+0 records in 00:27:40.063 256+0 records out 00:27:40.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18359 s, 5.7 MB/s 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.063 19:20:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:40.321 256+0 records in 00:27:40.321 256+0 records out 00:27:40.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189802 s, 5.5 MB/s 00:27:40.321 19:20:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.321 19:20:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:40.579 256+0 records in 00:27:40.579 256+0 records out 00:27:40.579 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.182201 s, 5.8 MB/s 00:27:40.579 19:20:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.579 19:20:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:40.837 256+0 records in 00:27:40.837 256+0 records out 00:27:40.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179706 s, 5.8 MB/s 00:27:40.837 19:20:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.837 19:20:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:27:40.837 256+0 records in 00:27:40.837 256+0 records out 00:27:40.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179266 s, 5.8 MB/s 00:27:40.837 19:20:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.837 19:20:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:27:41.095 256+0 records in 00:27:41.095 256+0 records out 00:27:41.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180384 s, 5.8 MB/s 00:27:41.095 19:20:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:41.095 19:20:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:27:41.353 256+0 records in 00:27:41.353 256+0 records out 00:27:41.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183251 s, 5.7 MB/s 00:27:41.353 19:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:41.353 19:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:27:41.611 256+0 records in 00:27:41.611 256+0 records out 00:27:41.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214736 s, 4.9 MB/s 00:27:41.612 19:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:41.612 19:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:27:41.869 256+0 records in 00:27:41.869 256+0 records out 00:27:41.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247444 s, 4.2 MB/s 00:27:41.869 19:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:41.869 19:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:27:41.869 256+0 records in 00:27:41.869 256+0 records out 00:27:41.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181205 s, 5.8 MB/s 00:27:41.869 19:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:41.869 19:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:27:42.127 256+0 records in 00:27:42.127 256+0 records out 00:27:42.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18573 s, 5.6 MB/s 00:27:42.127 19:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.127 19:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:27:42.421 256+0 records in 00:27:42.421 256+0 records out 00:27:42.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190137 s, 5.5 MB/s 00:27:42.421 19:20:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.421 19:20:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 
bs=4096 count=256 oflag=direct 00:27:42.421 256+0 records in 00:27:42.421 256+0 records out 00:27:42.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18492 s, 5.7 MB/s 00:27:42.421 19:20:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.421 19:20:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:27:42.680 256+0 records in 00:27:42.680 256+0 records out 00:27:42.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169146 s, 6.2 MB/s 00:27:42.680 19:20:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.680 19:20:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:27:42.939 256+0 records in 00:27:42.940 256+0 records out 00:27:42.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186867 s, 5.6 MB/s 00:27:42.940 19:20:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.940 19:20:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:27:43.198 256+0 records in 00:27:43.198 256+0 records out 00:27:43.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.292644 s, 3.6 MB/s 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp 
-b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.198 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@51 -- # local i 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:43.457 19:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@41 -- # break 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@45 -- # return 0 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:43.715 19:20:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@41 -- # break 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@45 -- # return 0 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.973 19:20:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:44.232 19:20:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:44.232 19:20:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:44.232 19:20:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:44.232 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.232 19:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.232 19:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:44.232 19:21:00 -- bdev/nbd_common.sh@41 -- # break 00:27:44.232 19:21:00 -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.232 19:21:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.232 19:21:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:44.490 19:21:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@41 -- # break 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.491 19:21:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@41 -- # break 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.748 19:21:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.005 
19:21:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@41 -- # break 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:45.005 19:21:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@41 -- # break 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:45.263 19:21:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@41 -- # break 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:45.525 19:21:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@41 -- # break 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:45.790 19:21:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@41 -- # break 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@45 -- # return 0 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:46.059 19:21:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:27:46.322 19:21:02 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:27:46.322 19:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:27:46.323 19:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:27:46.323 19:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:46.323 19:21:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:46.323 19:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:27:46.323 19:21:02 -- bdev/nbd_common.sh@41 -- # break 00:27:46.581 19:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:27:46.581 19:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:46.581 19:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:27:46.840 19:21:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:27:46.840 19:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:27:46.840 19:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@41 -- # break 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:46.841 19:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@41 -- # break 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:47.098 19:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@41 -- # break 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:47.357 19:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@41 -- # 
break 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:47.615 19:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@41 -- # break 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:47.883 19:21:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@65 -- # true 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@65 -- # count=0 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@104 -- # count=0 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:48.168 19:21:03 -- bdev/nbd_common.sh@109 -- # return 0 00:27:48.168 19:21:03 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:27:48.169 19:21:03 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:48.169 19:21:03 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:27:48.169 19:21:03 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:48.169 19:21:03 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:48.169 19:21:03 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:48.426 malloc_lvol_verify 00:27:48.426 19:21:04 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:48.684 a024c911-209c-4199-8fb5-dd9e7e58ceae 00:27:48.684 19:21:04 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:48.942 ef4e7457-770c-42f8-8c53-7c8c932fb738 00:27:48.942 19:21:04 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:49.200 /dev/nbd0 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:49.200 mke2fs 1.45.5 (07-Jan-2020) 00:27:49.200 00:27:49.200 Filesystem too small for a journal 00:27:49.200 Creating filesystem with 1024 4k blocks and 1024 inodes 
00:27:49.200 00:27:49.200 Allocating group tables: 0/1 done 00:27:49.200 Writing inode tables: 0/1 done 00:27:49.200 Writing superblocks and filesystem accounting information: 0/1 done 00:27:49.200 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@51 -- # local i 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:49.200 19:21:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@41 -- # break 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@45 -- # return 0 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:49.458 19:21:05 -- bdev/nbd_common.sh@147 -- # return 0 00:27:49.458 19:21:05 -- bdev/blockdev.sh@326 -- # killprocess 117021 00:27:49.458 19:21:05 -- common/autotest_common.sh@936 -- # '[' -z 117021 ']' 00:27:49.458 19:21:05 -- common/autotest_common.sh@940 -- # kill -0 117021 00:27:49.458 19:21:05 -- common/autotest_common.sh@941 -- # uname 00:27:49.458 19:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:49.458 19:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117021 00:27:49.458 killing process with pid 117021 00:27:49.458 19:21:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:49.458 19:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:49.458 19:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117021' 00:27:49.458 19:21:05 -- common/autotest_common.sh@955 -- # kill 117021 00:27:49.458 19:21:05 -- common/autotest_common.sh@960 -- # wait 117021 00:27:51.991 ************************************ 00:27:51.991 END TEST bdev_nbd 00:27:51.991 ************************************ 00:27:51.991 19:21:07 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:27:51.991 00:27:51.991 real 0m28.155s 00:27:51.991 user 0m36.375s 00:27:51.991 sys 0m11.386s 00:27:51.991 19:21:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:51.991 19:21:07 -- common/autotest_common.sh@10 -- # set +x 00:27:52.249 19:21:07 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:27:52.249 19:21:07 -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:27:52.249 19:21:07 -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:27:52.249 19:21:07 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:27:52.249 19:21:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:52.249 19:21:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:52.249 19:21:07 -- common/autotest_common.sh@10 -- # set +x 00:27:52.249 ************************************ 00:27:52.249 START TEST bdev_fio 00:27:52.249 
************************************ 00:27:52.249 19:21:08 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@331 -- # local env_context 00:27:52.249 19:21:08 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:27:52.249 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:27:52.249 19:21:08 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:52.249 19:21:08 -- bdev/blockdev.sh@339 -- # echo '' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:27:52.249 19:21:08 -- bdev/blockdev.sh@339 -- # env_context= 00:27:52.249 19:21:08 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:27:52.249 19:21:08 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:52.249 19:21:08 -- common/autotest_common.sh@1267 -- # local workload=verify 00:27:52.249 19:21:08 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:27:52.249 19:21:08 -- common/autotest_common.sh@1269 -- # local env_context= 00:27:52.249 19:21:08 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:27:52.249 19:21:08 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:52.249 19:21:08 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:27:52.249 19:21:08 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:27:52.249 19:21:08 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:52.249 19:21:08 -- common/autotest_common.sh@1287 -- # cat 00:27:52.249 19:21:08 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:27:52.249 19:21:08 -- common/autotest_common.sh@1300 -- # cat 00:27:52.249 19:21:08 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:27:52.249 19:21:08 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:27:52.249 19:21:08 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:52.249 19:21:08 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc2p2 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:27:52.249 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.249 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:27:52.249 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:27:52.250 19:21:08 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:52.250 19:21:08 -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:27:52.250 19:21:08 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:27:52.250 19:21:08 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:52.250 19:21:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:27:52.250 19:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:52.250 19:21:08 -- common/autotest_common.sh@10 -- # set +x 00:27:52.250 ************************************ 00:27:52.250 START TEST bdev_fio_rw_verify 00:27:52.250 ************************************ 00:27:52.250 19:21:08 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 
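The blockdev.sh@341-343 trace above builds bdev.fio by appending one [job_<bdev>] section per bdev on top of the header written by fio_config_gen, then hands the file to fio through the spdk_bdev ioengine. A minimal sketch of that pattern, assuming a hypothetical output path and an illustrative subset of the traced bdev list (this is not the actual SPDK fio_config_gen helper):

  #!/usr/bin/env bash
  # Append one fio job section per bdev, mirroring the echo '[job_X]' /
  # echo filename=X pairs traced above.
  config=bdev.fio                                  # hypothetical output file
  bdevs=(Malloc0 Malloc1p0 TestPT raid0 concat0)   # illustrative subset of the traced list
  for b in "${bdevs[@]}"; do
    {
      echo "[job_${b}]"
      echo "filename=${b}"
    } >>"$config"
  done
  # The generated file is then passed to fio with the SPDK bdev plugin, e.g.:
  #   fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$config" \
  #       --spdk_json_conf=bdev.json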
00:27:52.250 19:21:08 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:52.250 19:21:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:52.250 19:21:08 -- common/autotest_common.sh@1325 -- # sanitizers=(libasan libclang_rt.asan) 00:27:52.250 19:21:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:52.250 19:21:08 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.250 19:21:08 -- common/autotest_common.sh@1327 -- # shift 00:27:52.250 19:21:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:52.250 19:21:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.510 19:21:08 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.510 19:21:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:52.510 19:21:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:52.510 19:21:08 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:27:52.510 19:21:08 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:27:52.510 19:21:08 -- common/autotest_common.sh@1333 -- # break 00:27:52.510 19:21:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:52.510 19:21:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:52.510 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_TestPT: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:52.510 fio-3.35 00:27:52.510 Starting 16 threads 00:28:04.721 00:28:04.721 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=118291: Thu Apr 18 19:21:20 2024 00:28:04.721 read: IOPS=58.1k, BW=227MiB/s (238MB/s)(2270MiB/10004msec) 00:28:04.721 slat (usec): min=2, max=37490, avg=49.79, stdev=497.52 00:28:04.722 clat (usec): min=8, max=44494, avg=382.69, stdev=1417.48 00:28:04.722 lat (usec): min=25, max=44551, avg=432.49, stdev=1502.22 00:28:04.722 clat percentiles (usec): 00:28:04.722 | 50.000th=[ 235], 99.000th=[ 1450], 99.900th=[16581], 99.990th=[28705], 00:28:04.722 | 99.999th=[44303] 00:28:04.722 write: IOPS=91.8k, BW=359MiB/s (376MB/s)(3537MiB/9867msec); 0 zone resets 00:28:04.722 slat (usec): min=6, max=42681, avg=87.18, stdev=714.07 00:28:04.722 clat (usec): min=8, max=44318, avg=507.78, stdev=1700.66 00:28:04.722 lat (usec): min=43, max=44385, avg=594.96, stdev=1843.78 00:28:04.722 clat percentiles (usec): 00:28:04.722 | 50.000th=[ 297], 99.000th=[10552], 99.900th=[20055], 99.990th=[30016], 00:28:04.722 | 99.999th=[39584] 00:28:04.722 bw ( KiB/s): min=226921, max=586896, per=98.57%, avg=361882.74, stdev=6112.31, samples=304 00:28:04.722 iops : min=56730, max=146724, avg=90470.47, stdev=1528.08, samples=304 00:28:04.722 lat (usec) : 10=0.01%, 20=0.01%, 50=0.27%, 100=5.04%, 250=39.78% 00:28:04.722 lat (usec) : 500=45.37%, 750=6.69%, 1000=0.85% 00:28:04.722 lat (msec) : 2=0.73%, 4=0.09%, 10=0.22%, 20=0.87%, 50=0.09% 00:28:04.722 cpu : usr=57.48%, sys=1.99%, ctx=211692, majf=0, minf=62894 00:28:04.722 IO depths : 1=11.6%, 2=24.5%, 4=51.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.722 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.722 issued rwts: total=581054,905580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:04.722 00:28:04.722 Run status group 0 (all jobs): 00:28:04.722 READ: bw=227MiB/s (238MB/s), 227MiB/s-227MiB/s (238MB/s-238MB/s), io=2270MiB (2380MB), run=10004-10004msec 00:28:04.722 WRITE: bw=359MiB/s (376MB/s), 359MiB/s-359MiB/s (376MB/s-376MB/s), io=3537MiB (3709MB), run=9867-9867msec 00:28:08.007 ----------------------------------------------------- 00:28:08.007 Suppressions used: 00:28:08.007 count bytes template 00:28:08.007 16 140 /usr/src/fio/parse.c 00:28:08.007 11036 1059456 /usr/src/fio/iolog.c 00:28:08.007 2 596 libcrypto.so 00:28:08.007 ----------------------------------------------------- 00:28:08.007 00:28:08.007 00:28:08.007 real 0m15.559s 00:28:08.007 user 1m39.370s 00:28:08.007 sys 0m4.415s 00:28:08.007 19:21:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:08.007 19:21:23 -- common/autotest_common.sh@10 -- # set +x 00:28:08.007 ************************************ 00:28:08.007 END TEST bdev_fio_rw_verify 00:28:08.007 ************************************ 00:28:08.007 
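Before the rw_verify run above, the autotest_common.sh@1330-1338 trace locates the libasan runtime linked into the spdk_bdev fio plugin and puts it first in LD_PRELOAD so the sanitizer initializes before fio loads the plugin. A hedged stand-alone sketch of that detection, with an illustrative plugin path (not the SPDK helper itself):

  #!/usr/bin/env bash
  # Find the ASAN runtime the fio plugin links against and preload it with the plugin.
  plugin=/path/to/build/fio/spdk_bdev              # hypothetical; the CI uses the spdk_repo build tree
  asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')
  if [[ -n "$asan_lib" ]]; then
    export LD_PRELOAD="$asan_lib $plugin"
  fi
  # fio is then launched as usual, e.g.:
  #   fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio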
19:21:23 -- bdev/blockdev.sh@350 -- # rm -f 00:28:08.007 19:21:23 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:08.008 19:21:23 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:08.008 19:21:23 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:08.008 19:21:23 -- common/autotest_common.sh@1267 -- # local workload=trim 00:28:08.008 19:21:23 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:28:08.008 19:21:23 -- common/autotest_common.sh@1269 -- # local env_context= 00:28:08.008 19:21:23 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:28:08.008 19:21:23 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:08.008 19:21:23 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:28:08.008 19:21:23 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:28:08.008 19:21:23 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:08.008 19:21:23 -- common/autotest_common.sh@1287 -- # cat 00:28:08.008 19:21:23 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:28:08.008 19:21:23 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:28:08.008 19:21:23 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:28:08.008 19:21:23 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:08.009 19:21:23 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "dadf98a7-65c1-4723-983e-8da8d05d48f2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dadf98a7-65c1-4723-983e-8da8d05d48f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ba679c4a-180f-5a92-aca7-776982b68bd2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ba679c4a-180f-5a92-aca7-776982b68bd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "63fb2c7b-57e5-5fd7-9a8c-3ae88cce2bc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "63fb2c7b-57e5-5fd7-9a8c-3ae88cce2bc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "e53c50ed-ae7b-5871-9edc-8ae9472d4bdd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e53c50ed-ae7b-5871-9edc-8ae9472d4bdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "20b241fb-2131-58ac-a02a-42e1a8a4af01"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20b241fb-2131-58ac-a02a-42e1a8a4af01",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f4d44533-0704-541f-a2a2-5ba45f2d2557"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f4d44533-0704-541f-a2a2-5ba45f2d2557",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "dc5105fa-821c-5461-9814-fe8ee16bb0b2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc5105fa-821c-5461-9814-fe8ee16bb0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' 
"e99a37b4-4e2b-5ed5-880f-f6342e6900bb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e99a37b4-4e2b-5ed5-880f-f6342e6900bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "ffe32405-bae3-544f-8cf6-b8488ae72079"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ffe32405-bae3-544f-8cf6-b8488ae72079",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8a0f05cf-7ae7-5128-a9e8-2a34f5912105"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a0f05cf-7ae7-5128-a9e8-2a34f5912105",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0422e0b9-bedd-57f2-9157-4caf686f556e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0422e0b9-bedd-57f2-9157-4caf686f556e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "97e95f00-dfa1-5f3b-a65d-899ef5bdd73a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "97e95f00-dfa1-5f3b-a65d-899ef5bdd73a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "8a6767a6-ed2c-4be0-b1f4-414d69803396"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8a6767a6-ed2c-4be0-b1f4-414d69803396",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8a6767a6-ed2c-4be0-b1f4-414d69803396",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "9cd96815-af1e-4707-a06e-5a804b3a447f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4819b71c-3a88-4c5e-92d9-631a07ccdf82",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b702b067-b7b6-4614-bcba-4112c5b421e9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b702b067-b7b6-4614-bcba-4112c5b421e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b702b067-b7b6-4614-bcba-4112c5b421e9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "db5e1962-fbfb-4f16-838f-fbba9281c8b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "16336309-a6e2-48cf-a365-c2ff69474d48",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "28b3d2ae-a464-4566-975c-98b3f93d1f91"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "28b3d2ae-a464-4566-975c-98b3f93d1f91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "28b3d2ae-a464-4566-975c-98b3f93d1f91",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "902b37f4-83d4-4ced-99c2-a189470e8114",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "85524745-491f-4506-b55a-9e3ff3e5caa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b3790576-8a27-48bb-86c4-673bba9a0bf1"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b3790576-8a27-48bb-86c4-673bba9a0bf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:28:08.009 19:21:23 -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:28:08.009 Malloc1p0 00:28:08.009 Malloc1p1 00:28:08.009 Malloc2p0 00:28:08.009 Malloc2p1 00:28:08.009 Malloc2p2 00:28:08.009 Malloc2p3 00:28:08.009 Malloc2p4 00:28:08.009 Malloc2p5 00:28:08.009 Malloc2p6 00:28:08.009 Malloc2p7 00:28:08.009 TestPT 00:28:08.009 raid0 00:28:08.009 concat0 ]] 00:28:08.009 19:21:23 -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "dadf98a7-65c1-4723-983e-8da8d05d48f2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dadf98a7-65c1-4723-983e-8da8d05d48f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' 
' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ba679c4a-180f-5a92-aca7-776982b68bd2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ba679c4a-180f-5a92-aca7-776982b68bd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "63fb2c7b-57e5-5fd7-9a8c-3ae88cce2bc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "63fb2c7b-57e5-5fd7-9a8c-3ae88cce2bc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "e53c50ed-ae7b-5871-9edc-8ae9472d4bdd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e53c50ed-ae7b-5871-9edc-8ae9472d4bdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "20b241fb-2131-58ac-a02a-42e1a8a4af01"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20b241fb-2131-58ac-a02a-42e1a8a4af01",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f4d44533-0704-541f-a2a2-5ba45f2d2557"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' 
' "num_blocks": 8192,' ' "uuid": "f4d44533-0704-541f-a2a2-5ba45f2d2557",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "dc5105fa-821c-5461-9814-fe8ee16bb0b2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc5105fa-821c-5461-9814-fe8ee16bb0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e99a37b4-4e2b-5ed5-880f-f6342e6900bb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e99a37b4-4e2b-5ed5-880f-f6342e6900bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "ffe32405-bae3-544f-8cf6-b8488ae72079"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ffe32405-bae3-544f-8cf6-b8488ae72079",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8a0f05cf-7ae7-5128-a9e8-2a34f5912105"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a0f05cf-7ae7-5128-a9e8-2a34f5912105",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0422e0b9-bedd-57f2-9157-4caf686f556e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0422e0b9-bedd-57f2-9157-4caf686f556e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "97e95f00-dfa1-5f3b-a65d-899ef5bdd73a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "97e95f00-dfa1-5f3b-a65d-899ef5bdd73a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "8a6767a6-ed2c-4be0-b1f4-414d69803396"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8a6767a6-ed2c-4be0-b1f4-414d69803396",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8a6767a6-ed2c-4be0-b1f4-414d69803396",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "9cd96815-af1e-4707-a06e-5a804b3a447f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4819b71c-3a88-4c5e-92d9-631a07ccdf82",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b702b067-b7b6-4614-bcba-4112c5b421e9"' ' ],' ' "product_name": "Raid Volume",' ' 
"block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b702b067-b7b6-4614-bcba-4112c5b421e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b702b067-b7b6-4614-bcba-4112c5b421e9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "db5e1962-fbfb-4f16-838f-fbba9281c8b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "16336309-a6e2-48cf-a365-c2ff69474d48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "28b3d2ae-a464-4566-975c-98b3f93d1f91"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "28b3d2ae-a464-4566-975c-98b3f93d1f91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "28b3d2ae-a464-4566-975c-98b3f93d1f91",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "902b37f4-83d4-4ced-99c2-a189470e8114",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "85524745-491f-4506-b55a-9e3ff3e5caa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b3790576-8a27-48bb-86c4-673bba9a0bf1"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b3790576-8a27-48bb-86c4-673bba9a0bf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- 
bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:28:08.010 19:21:23 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:08.010 19:21:23 -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:28:08.010 19:21:23 -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:28:08.010 19:21:23 -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:08.010 19:21:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:28:08.010 19:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:08.010 19:21:23 -- common/autotest_common.sh@10 -- # set +x 00:28:08.273 ************************************ 00:28:08.273 START TEST bdev_fio_trim 00:28:08.273 ************************************ 00:28:08.273 19:21:23 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:08.273 19:21:23 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:08.273 19:21:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:08.273 19:21:23 -- common/autotest_common.sh@1325 -- # sanitizers=(libasan libclang_rt.asan) 00:28:08.273 19:21:23 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:08.273 19:21:23 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.273 19:21:23 -- common/autotest_common.sh@1327 -- # shift 00:28:08.273 19:21:23 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:08.273 19:21:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.273 19:21:23 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.273 19:21:23 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:08.273 19:21:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:08.273 19:21:23 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:28:08.273 19:21:23 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:28:08.273 19:21:23 -- 
common/autotest_common.sh@1333 -- # break 00:28:08.273 19:21:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:08.273 19:21:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:08.273 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:08.274 fio-3.35 00:28:08.274 Starting 14 threads 00:28:20.479 00:28:20.479 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=118552: Thu Apr 18 19:21:36 2024 00:28:20.479 write: IOPS=103k, BW=402MiB/s (421MB/s)(4020MiB/10001msec); 0 zone resets 00:28:20.479 slat (usec): min=2, max=44094, avg=49.11, stdev=452.08 00:28:20.479 clat (usec): min=27, max=28387, avg=330.13, stdev=1192.17 00:28:20.479 lat (usec): min=38, max=44470, avg=379.25, stdev=1274.60 00:28:20.479 clat percentiles (usec): 00:28:20.479 | 50.000th=[ 227], 99.000th=[ 840], 99.900th=[16319], 99.990th=[21627], 00:28:20.479 | 99.999th=[28181] 00:28:20.479 bw ( KiB/s): min=288320, max=570672, per=100.00%, avg=411956.68, stdev=6210.69, samples=266 00:28:20.479 iops : min=72080, max=142668, avg=102989.16, stdev=1552.67, samples=266 00:28:20.479 trim: IOPS=103k, BW=402MiB/s (421MB/s)(4020MiB/10001msec); 0 zone resets 00:28:20.479 slat (usec): min=4, max=28050, avg=34.55, stdev=385.73 00:28:20.479 clat (usec): min=7, max=44471, avg=377.43, stdev=1271.14 00:28:20.479 lat (usec): min=17, max=44498, avg=411.98, stdev=1328.41 00:28:20.479 clat percentiles (usec): 00:28:20.479 | 50.000th=[ 
262], 99.000th=[ 996], 99.900th=[16450], 99.990th=[23725], 00:28:20.479 | 99.999th=[28181] 00:28:20.479 bw ( KiB/s): min=288320, max=570680, per=100.00%, avg=411956.68, stdev=6210.84, samples=266 00:28:20.479 iops : min=72080, max=142670, avg=102989.16, stdev=1552.71, samples=266 00:28:20.479 lat (usec) : 10=0.01%, 20=0.01%, 50=0.14%, 100=2.60%, 250=49.67% 00:28:20.479 lat (usec) : 500=45.16%, 750=1.05%, 1000=0.48% 00:28:20.479 lat (msec) : 2=0.21%, 4=0.02%, 10=0.05%, 20=0.58%, 50=0.03% 00:28:20.479 cpu : usr=68.33%, sys=0.61%, ctx=165215, majf=0, minf=676 00:28:20.479 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:20.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.479 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.479 issued rwts: total=0,1029057,1029058,0 short=0,0,0,0 dropped=0,0,0,0 00:28:20.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:20.479 00:28:20.479 Run status group 0 (all jobs): 00:28:20.479 WRITE: bw=402MiB/s (421MB/s), 402MiB/s-402MiB/s (421MB/s-421MB/s), io=4020MiB (4215MB), run=10001-10001msec 00:28:20.479 TRIM: bw=402MiB/s (421MB/s), 402MiB/s-402MiB/s (421MB/s-421MB/s), io=4020MiB (4215MB), run=10001-10001msec 00:28:23.762 ----------------------------------------------------- 00:28:23.762 Suppressions used: 00:28:23.762 count bytes template 00:28:23.762 14 129 /usr/src/fio/parse.c 00:28:23.762 2 596 libcrypto.so 00:28:23.762 ----------------------------------------------------- 00:28:23.762 00:28:23.762 00:28:23.762 real 0m15.066s 00:28:23.762 user 1m42.511s 00:28:23.762 sys 0m2.025s 00:28:23.762 19:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:23.762 ************************************ 00:28:23.762 END TEST bdev_fio_trim 00:28:23.762 ************************************ 00:28:23.762 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:28:23.762 19:21:39 -- bdev/blockdev.sh@368 -- # rm -f 00:28:23.762 19:21:39 -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:23.762 19:21:39 -- bdev/blockdev.sh@370 -- # popd 00:28:23.762 /home/vagrant/spdk_repo/spdk 00:28:23.762 19:21:39 -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:28:23.762 00:28:23.762 real 0m31.068s 00:28:23.762 user 3m22.136s 00:28:23.762 sys 0m6.602s 00:28:23.762 19:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:23.762 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:28:23.762 ************************************ 00:28:23.762 END TEST bdev_fio 00:28:23.762 ************************************ 00:28:23.762 19:21:39 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.762 19:21:39 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:23.762 19:21:39 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:23.762 19:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:23.762 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:28:23.762 ************************************ 00:28:23.762 START TEST bdev_verify 00:28:23.762 ************************************ 00:28:23.762 19:21:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:23.762 [2024-04-18 
19:21:39.285413] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:28:23.762 [2024-04-18 19:21:39.285639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118785 ] 00:28:23.762 [2024-04-18 19:21:39.479897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.020 [2024-04-18 19:21:39.792640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.020 [2024-04-18 19:21:39.792642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.585 [2024-04-18 19:21:40.300627] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:24.585 [2024-04-18 19:21:40.300785] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:24.585 [2024-04-18 19:21:40.308570] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:24.585 [2024-04-18 19:21:40.308631] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:24.585 [2024-04-18 19:21:40.316625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:28:24.585 [2024-04-18 19:21:40.316674] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:24.585 [2024-04-18 19:21:40.316737] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:24.852 [2024-04-18 19:21:40.578502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:28:24.852 [2024-04-18 19:21:40.578763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.852 [2024-04-18 19:21:40.578837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:24.852 [2024-04-18 19:21:40.578871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.852 [2024-04-18 19:21:40.583333] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.852 [2024-04-18 19:21:40.583434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:28:25.489 Running I/O for 5 seconds... 
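For reference, the trim pass above and the 5-second verify pass now starting reduce to two commands once the shell plumbing is stripped away; a minimal sketch of re-running them by hand, assuming the same checkout and generated config under /home/vagrant/spdk_repo/spdk/test/bdev (every flag is taken from the trace itself, nothing added):
# fio trim/write pass through the spdk_bdev ioengine plugin; on an ASAN build the
# sanitizer runtime has to come first in LD_PRELOAD, exactly as the trace arranges
LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
  --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  --aux-path=/home/vagrant/spdk_repo/spdk/../output
# verify pass with bdevperf: 128-deep queue, 4 KiB I/O, 5 seconds, cores 0x3
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3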
00:28:30.755 00:28:30.755 Latency(us) 00:28:30.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.755 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x1000 00:28:30.755 Malloc0 : 5.13 1122.58 4.39 0.00 0.00 113810.72 698.27 447392.43 00:28:30.755 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x1000 length 0x1000 00:28:30.755 Malloc0 : 5.30 1279.72 5.00 0.00 0.00 94548.09 577.34 193736.90 00:28:30.755 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x800 00:28:30.755 Malloc1p0 : 5.18 593.28 2.32 0.00 0.00 214768.48 3308.01 237677.23 00:28:30.755 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x800 length 0x800 00:28:30.755 Malloc1p0 : 5.09 629.07 2.46 0.00 0.00 202950.75 6179.11 281617.55 00:28:30.755 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x800 00:28:30.755 Malloc1p1 : 5.18 593.03 2.32 0.00 0.00 214253.62 3308.01 230686.72 00:28:30.755 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x800 length 0x800 00:28:30.755 Malloc1p1 : 5.09 628.81 2.46 0.00 0.00 202068.16 6366.35 267636.54 00:28:30.755 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p0 : 5.18 592.78 2.32 0.00 0.00 213752.54 3011.54 230686.72 00:28:30.755 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p0 : 5.23 636.43 2.49 0.00 0.00 198713.09 7489.83 252656.88 00:28:30.755 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p1 : 5.18 592.52 2.31 0.00 0.00 213194.19 4712.35 225693.50 00:28:30.755 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p1 : 5.23 636.18 2.49 0.00 0.00 197653.44 6584.81 238675.87 00:28:30.755 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p2 : 5.19 592.25 2.31 0.00 0.00 212595.34 2777.48 223696.21 00:28:30.755 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p2 : 5.23 635.93 2.48 0.00 0.00 196683.73 5554.96 226692.14 00:28:30.755 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p3 : 5.19 591.98 2.31 0.00 0.00 212007.02 4244.24 223696.21 00:28:30.755 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p3 : 5.24 635.68 2.48 0.00 0.00 195835.12 4712.35 214708.42 00:28:30.755 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p4 : 5.19 591.72 2.31 0.00 0.00 211430.81 2652.65 
217704.35 00:28:30.755 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p4 : 5.24 635.43 2.48 0.00 0.00 195119.42 2824.29 211712.49 00:28:30.755 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p5 : 5.19 591.45 2.31 0.00 0.00 210913.21 2652.65 220700.28 00:28:30.755 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p5 : 5.24 635.18 2.48 0.00 0.00 194685.09 2200.14 208716.56 00:28:30.755 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p6 : 5.20 591.19 2.31 0.00 0.00 210424.83 5492.54 214708.42 00:28:30.755 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p6 : 5.24 634.90 2.48 0.00 0.00 194315.59 2746.27 211712.49 00:28:30.755 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x200 00:28:30.755 Malloc2p7 : 5.26 608.67 2.38 0.00 0.00 203586.44 3183.18 212711.13 00:28:30.755 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x200 length 0x200 00:28:30.755 Malloc2p7 : 5.24 634.63 2.48 0.00 0.00 193953.04 3401.63 209715.20 00:28:30.755 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.755 Verification LBA range: start 0x0 length 0x1000 00:28:30.756 TestPT : 5.26 594.05 2.32 0.00 0.00 207587.49 8800.55 207717.91 00:28:30.756 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x1000 length 0x1000 00:28:30.756 TestPT : 5.28 630.59 2.46 0.00 0.00 194676.80 29459.99 210713.84 00:28:30.756 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x0 length 0x2000 00:28:30.756 raid0 : 5.26 608.23 2.38 0.00 0.00 202578.35 3900.95 206719.27 00:28:30.756 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x2000 length 0x2000 00:28:30.756 raid0 : 5.29 652.88 2.55 0.00 0.00 187576.92 4993.22 195734.19 00:28:30.756 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x0 length 0x2000 00:28:30.756 concat0 : 5.26 607.96 2.37 0.00 0.00 202062.42 2309.36 205720.62 00:28:30.756 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x2000 length 0x2000 00:28:30.756 concat0 : 5.30 652.64 2.55 0.00 0.00 187094.61 2293.76 192738.26 00:28:30.756 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x0 length 0x1000 00:28:30.756 raid1 : 5.27 607.69 2.37 0.00 0.00 201558.80 6335.15 211712.49 00:28:30.756 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x1000 length 0x1000 00:28:30.756 raid1 : 5.30 652.39 2.55 0.00 0.00 186717.03 3776.12 189742.32 00:28:30.756 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x0 
length 0x4e2 00:28:30.756 AIO0 : 5.27 607.46 2.37 0.00 0.00 200813.15 1006.45 216705.71 00:28:30.756 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.756 Verification LBA range: start 0x4e2 length 0x4e2 00:28:30.756 AIO0 : 5.30 652.01 2.55 0.00 0.00 186174.95 2184.53 194735.54 00:28:30.756 =================================================================================================================== 00:28:30.756 Total : 20949.30 81.83 0.00 0.00 189963.94 577.34 447392.43 00:28:34.048 00:28:34.048 real 0m10.403s 00:28:34.048 user 0m18.670s 00:28:34.048 sys 0m0.689s 00:28:34.048 19:21:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:34.048 19:21:49 -- common/autotest_common.sh@10 -- # set +x 00:28:34.048 ************************************ 00:28:34.048 END TEST bdev_verify 00:28:34.048 ************************************ 00:28:34.048 19:21:49 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:34.048 19:21:49 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:34.048 19:21:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.048 19:21:49 -- common/autotest_common.sh@10 -- # set +x 00:28:34.048 ************************************ 00:28:34.048 START TEST bdev_verify_big_io 00:28:34.048 ************************************ 00:28:34.048 19:21:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:34.048 [2024-04-18 19:21:49.753970] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:28:34.048 [2024-04-18 19:21:49.754240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118941 ] 00:28:34.048 [2024-04-18 19:21:49.943564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:34.316 [2024-04-18 19:21:50.238574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.316 [2024-04-18 19:21:50.238578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.891 [2024-04-18 19:21:50.801640] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:34.891 [2024-04-18 19:21:50.801768] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:34.891 [2024-04-18 19:21:50.809605] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:34.891 [2024-04-18 19:21:50.809656] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:34.891 [2024-04-18 19:21:50.817649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:28:34.891 [2024-04-18 19:21:50.817714] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:34.891 [2024-04-18 19:21:50.817764] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:35.457 [2024-04-18 19:21:51.086499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:28:35.457 [2024-04-18 19:21:51.086666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.457 [2024-04-18 19:21:51.086717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:35.457 [2024-04-18 19:21:51.086742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.457 [2024-04-18 19:21:51.090165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.457 [2024-04-18 19:21:51.090226] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:28:35.715 [2024-04-18 19:21:51.594689] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.599651] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.605180] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.610739] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.615488] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.621036] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.625722] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.631184] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.635923] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:28:35.715 [2024-04-18 19:21:51.641470] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.646282] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.651751] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.656415] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.662044] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.667592] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.672340] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:28:35.974 [2024-04-18 19:21:51.793843] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:28:35.974 [2024-04-18 19:21:51.803661] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:28:35.974 Running I/O for 5 seconds... 00:28:42.530 00:28:42.530 Latency(us) 00:28:42.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.530 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x100 00:28:42.530 Malloc0 : 5.97 150.03 9.38 0.00 0.00 841183.19 760.69 1549895.19 00:28:42.530 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x100 length 0x100 00:28:42.530 Malloc0 : 5.58 160.69 10.04 0.00 0.00 783558.91 721.68 1709678.20 00:28:42.530 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x80 00:28:42.530 Malloc1p0 : 6.10 89.80 5.61 0.00 0.00 1354490.79 2715.06 2396745.14 00:28:42.530 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x80 length 0x80 00:28:42.530 Malloc1p0 : 6.28 38.20 2.39 0.00 0.00 3079739.83 1419.95 5272839.31 00:28:42.530 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x80 00:28:42.530 Malloc1p1 : 6.26 38.32 2.40 0.00 0.00 3093461.20 1490.16 5592405.33 00:28:42.530 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x80 length 0x80 00:28:42.530 Malloc1p1 : 6.31 40.58 2.54 0.00 0.00 2856446.55 1505.77 5145012.91 00:28:42.530 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p0 : 6.05 23.82 1.49 0.00 0.00 1238182.48 741.18 1549895.19 00:28:42.530 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p0 : 5.96 26.85 1.68 0.00 0.00 1081603.57 756.78 1557884.34 00:28:42.530 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p1 : 6.05 23.81 1.49 0.00 0.00 1231113.93 706.07 1549895.19 00:28:42.530 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p1 : 5.96 26.85 1.68 0.00 0.00 1074226.21 741.18 1549895.19 00:28:42.530 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p2 : 6.05 23.80 1.49 0.00 0.00 1223855.61 713.87 1549895.19 00:28:42.530 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p2 : 5.96 26.84 1.68 0.00 0.00 1066879.23 713.87 1549895.19 00:28:42.530 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p3 : 6.05 23.80 1.49 0.00 0.00 1216602.55 729.48 1541906.04 00:28:42.530 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p3 : 5.96 26.83 1.68 0.00 0.00 1060746.11 713.87 1541906.04 00:28:42.530 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p4 : 6.05 23.79 1.49 0.00 0.00 1209183.67 678.77 1541906.04 00:28:42.530 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p4 : 5.96 26.83 1.68 0.00 0.00 1054122.83 908.92 1541906.04 00:28:42.530 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p5 : 6.05 23.79 1.49 0.00 0.00 1202211.20 678.77 1549895.19 00:28:42.530 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p5 : 5.97 26.82 1.68 0.00 0.00 1047510.05 690.47 1533916.89 00:28:42.530 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p6 : 6.05 23.78 1.49 0.00 0.00 1195519.39 889.42 1549895.19 00:28:42.530 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p6 : 5.97 26.82 1.68 0.00 0.00 1040734.12 713.87 1525927.74 00:28:42.530 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x20 00:28:42.530 Malloc2p7 : 6.06 23.78 1.49 0.00 0.00 1188679.61 694.37 1549895.19 00:28:42.530 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x20 length 0x20 00:28:42.530 Malloc2p7 : 5.97 26.81 1.68 0.00 0.00 1032814.90 729.48 1509949.44 00:28:42.530 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x100 00:28:42.530 TestPT : 6.26 36.40 2.27 0.00 0.00 3033360.86 73899.64 4569794.07 00:28:42.530 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x100 length 0x100 00:28:42.530 TestPT : 6.33 40.42 2.53 0.00 0.00 2663223.10 93373.20 4282184.66 00:28:42.530 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x200 00:28:42.530 raid0 : 6.27 38.30 2.39 0.00 0.00 2810218.61 1513.57 5049143.10 00:28:42.530 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x200 length 0x200 00:28:42.530 raid0 : 6.35 45.36 2.84 0.00 0.00 2318218.28 1755.43 4729577.08 00:28:42.530 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x200 00:28:42.530 concat0 : 6.27 40.82 2.55 0.00 0.00 2586338.61 1693.01 4953273.30 00:28:42.530 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x200 length 0x200 00:28:42.530 concat0 : 6.31 53.21 3.33 0.00 
0.00 1954278.84 1685.21 4601750.67 00:28:42.530 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x100 00:28:42.530 raid1 : 6.31 45.66 2.85 0.00 0.00 2260238.68 2137.72 4825446.89 00:28:42.530 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x100 length 0x100 00:28:42.530 raid1 : 6.32 65.38 4.09 0.00 0.00 1547648.16 1950.48 4473924.27 00:28:42.530 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x0 length 0x4e 00:28:42.530 AIO0 : 6.27 52.16 3.26 0.00 0.00 1191986.83 1708.62 3563161.11 00:28:42.530 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:28:42.530 Verification LBA range: start 0x4e length 0x4e 00:28:42.530 AIO0 : 6.40 92.55 5.78 0.00 0.00 654817.18 1248.30 2908050.77 00:28:42.530 =================================================================================================================== 00:28:42.530 Total : 1432.93 89.56 0.00 0.00 1518963.64 678.77 5592405.33 00:28:46.715 00:28:46.715 real 0m12.226s 00:28:46.715 user 0m22.317s 00:28:46.715 sys 0m0.749s 00:28:46.715 19:22:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:46.715 19:22:01 -- common/autotest_common.sh@10 -- # set +x 00:28:46.715 ************************************ 00:28:46.715 END TEST bdev_verify_big_io 00:28:46.715 ************************************ 00:28:46.715 19:22:01 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:46.715 19:22:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:46.715 19:22:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:46.715 19:22:01 -- common/autotest_common.sh@10 -- # set +x 00:28:46.715 ************************************ 00:28:46.716 START TEST bdev_write_zeroes 00:28:46.716 ************************************ 00:28:46.716 19:22:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:46.716 [2024-04-18 19:22:02.071101] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:28:46.716 [2024-04-18 19:22:02.071317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119132 ] 00:28:46.716 [2024-04-18 19:22:02.242804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.716 [2024-04-18 19:22:02.537141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.280 [2024-04-18 19:22:03.076000] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:47.280 [2024-04-18 19:22:03.076169] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:47.280 [2024-04-18 19:22:03.083934] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:47.280 [2024-04-18 19:22:03.084001] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:47.280 [2024-04-18 19:22:03.092005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:28:47.280 [2024-04-18 19:22:03.092093] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:47.280 [2024-04-18 19:22:03.092150] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:47.537 [2024-04-18 19:22:03.380652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:28:47.537 [2024-04-18 19:22:03.380953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.537 [2024-04-18 19:22:03.381025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:47.537 [2024-04-18 19:22:03.381082] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.537 [2024-04-18 19:22:03.384932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.537 [2024-04-18 19:22:03.385091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:28:48.100 Running I/O for 1 seconds... 
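All of these runs hand SPDK their block-device layout through a JSON config (--json for bdevperf, --spdk_json_conf for the fio plugin). A minimal sketch of the expected shape, written to a hypothetical /tmp/minimal_bdev.json; only the two structural rules exercised by the bdev_json_nonenclosed and bdev_json_nonarray cases further down (a top-level {} object, and "subsystems" being an array) are confirmed by this log, the rest follows the usual SPDK config layout:
cat > /tmp/minimal_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
JSON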
00:28:49.481 00:28:49.481 Latency(us) 00:28:49.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.481 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc0 : 1.05 4494.43 17.56 0.00 0.00 28459.51 709.97 43191.34 00:28:49.481 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc1p0 : 1.06 4487.71 17.53 0.00 0.00 28448.95 936.23 42442.36 00:28:49.481 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc1p1 : 1.06 4481.55 17.51 0.00 0.00 28420.38 1068.86 41194.06 00:28:49.481 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p0 : 1.06 4475.50 17.48 0.00 0.00 28384.58 994.74 40195.41 00:28:49.481 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p1 : 1.06 4469.62 17.46 0.00 0.00 28356.26 951.83 39196.77 00:28:49.481 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p2 : 1.06 4463.23 17.43 0.00 0.00 28336.73 994.74 38198.13 00:28:49.481 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p3 : 1.06 4457.34 17.41 0.00 0.00 28327.20 944.03 37449.14 00:28:49.481 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p4 : 1.06 4451.46 17.39 0.00 0.00 28302.02 955.73 36450.50 00:28:49.481 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p5 : 1.07 4445.56 17.37 0.00 0.00 28275.57 928.43 35951.18 00:28:49.481 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p6 : 1.07 4439.68 17.34 0.00 0.00 28248.59 955.73 36200.84 00:28:49.481 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 Malloc2p7 : 1.07 4433.30 17.32 0.00 0.00 28238.94 975.24 36200.84 00:28:49.481 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 TestPT : 1.07 4427.35 17.29 0.00 0.00 28209.08 951.83 35951.18 00:28:49.481 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 raid0 : 1.07 4420.32 17.27 0.00 0.00 28176.94 1560.38 35951.18 00:28:49.481 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 concat0 : 1.07 4413.15 17.24 0.00 0.00 28122.99 1521.37 36200.84 00:28:49.481 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 raid1 : 1.08 4404.52 17.21 0.00 0.00 28068.90 2496.61 36450.50 00:28:49.481 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.481 AIO0 : 1.08 4393.97 17.16 0.00 0.00 28000.46 1396.54 36450.50 00:28:49.481 =================================================================================================================== 00:28:49.481 Total : 71158.69 277.96 0.00 0.00 28273.59 709.97 43191.34 00:28:52.760 00:28:52.760 real 0m6.243s 00:28:52.760 user 0m5.413s 00:28:52.760 sys 0m0.625s 00:28:52.760 19:22:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:52.760 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:28:52.760 ************************************ 00:28:52.760 END TEST bdev_write_zeroes 00:28:52.760 ************************************ 00:28:52.760 19:22:08 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:52.760 19:22:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:52.760 19:22:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:52.760 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:28:52.761 ************************************ 00:28:52.761 START TEST bdev_json_nonenclosed 00:28:52.761 ************************************ 00:28:52.761 19:22:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:52.761 [2024-04-18 19:22:08.404680] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:28:52.761 [2024-04-18 19:22:08.404925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119250 ] 00:28:52.761 [2024-04-18 19:22:08.585847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.018 [2024-04-18 19:22:08.879141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.018 [2024-04-18 19:22:08.879300] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:28:53.018 [2024-04-18 19:22:08.879344] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:53.018 [2024-04-18 19:22:08.879401] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:53.584 00:28:53.584 real 0m1.146s 00:28:53.584 user 0m0.885s 00:28:53.584 sys 0m0.161s 00:28:53.584 19:22:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:53.584 19:22:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.584 ************************************ 00:28:53.584 END TEST bdev_json_nonenclosed 00:28:53.584 ************************************ 00:28:53.842 19:22:09 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:53.842 19:22:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:53.842 19:22:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:53.842 19:22:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.842 ************************************ 00:28:53.842 START TEST bdev_json_nonarray 00:28:53.842 ************************************ 00:28:53.842 19:22:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:53.842 [2024-04-18 19:22:09.657761] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:28:53.842 [2024-04-18 19:22:09.658517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119292 ] 00:28:54.099 [2024-04-18 19:22:09.830774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.357 [2024-04-18 19:22:10.070560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.357 [2024-04-18 19:22:10.070693] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:28:54.357 [2024-04-18 19:22:10.070729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:54.357 [2024-04-18 19:22:10.070755] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:54.922 00:28:54.922 real 0m1.025s 00:28:54.922 user 0m0.772s 00:28:54.922 sys 0m0.152s 00:28:54.922 19:22:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:54.922 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.922 ************************************ 00:28:54.922 END TEST bdev_json_nonarray 00:28:54.922 ************************************ 00:28:54.922 19:22:10 -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:28:54.922 19:22:10 -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:28:54.922 19:22:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:54.922 19:22:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:54.922 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.922 ************************************ 00:28:54.922 START TEST bdev_qos 00:28:54.922 ************************************ 00:28:54.922 19:22:10 -- common/autotest_common.sh@1111 -- # qos_test_suite '' 00:28:54.922 19:22:10 -- bdev/blockdev.sh@446 -- # QOS_PID=119338 00:28:54.922 Process qos testing pid: 119338 00:28:54.922 19:22:10 -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 119338' 00:28:54.922 19:22:10 -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:28:54.922 19:22:10 -- bdev/blockdev.sh@449 -- # waitforlisten 119338 00:28:54.922 19:22:10 -- common/autotest_common.sh@817 -- # '[' -z 119338 ']' 00:28:54.922 19:22:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.922 19:22:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:54.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.922 19:22:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.923 19:22:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:54.923 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.923 19:22:10 -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:28:54.923 [2024-04-18 19:22:10.760589] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:28:54.923 [2024-04-18 19:22:10.761105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119338 ] 00:28:55.202 [2024-04-18 19:22:10.954615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.460 [2024-04-18 19:22:11.249694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.026 19:22:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:56.027 19:22:11 -- common/autotest_common.sh@850 -- # return 0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:28:56.027 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.027 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 Malloc_0 00:28:56.027 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.027 19:22:11 -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:28:56.027 19:22:11 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0 00:28:56.027 19:22:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:56.027 19:22:11 -- common/autotest_common.sh@887 -- # local i 00:28:56.027 19:22:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:56.027 19:22:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:56.027 19:22:11 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:28:56.027 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.027 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.027 19:22:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:28:56.027 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.027 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 [ 00:28:56.027 { 00:28:56.027 "name": "Malloc_0", 00:28:56.027 "aliases": [ 00:28:56.027 "54350ba8-c5de-4a43-b00b-14c4e0b6b331" 00:28:56.027 ], 00:28:56.027 "product_name": "Malloc disk", 00:28:56.027 "block_size": 512, 00:28:56.027 "num_blocks": 262144, 00:28:56.027 "uuid": "54350ba8-c5de-4a43-b00b-14c4e0b6b331", 00:28:56.027 "assigned_rate_limits": { 00:28:56.027 "rw_ios_per_sec": 0, 00:28:56.027 "rw_mbytes_per_sec": 0, 00:28:56.027 "r_mbytes_per_sec": 0, 00:28:56.027 "w_mbytes_per_sec": 0 00:28:56.027 }, 00:28:56.027 "claimed": false, 00:28:56.027 "zoned": false, 00:28:56.027 "supported_io_types": { 00:28:56.027 "read": true, 00:28:56.027 "write": true, 00:28:56.027 "unmap": true, 00:28:56.027 "write_zeroes": true, 00:28:56.027 "flush": true, 00:28:56.027 "reset": true, 00:28:56.027 "compare": false, 00:28:56.027 "compare_and_write": false, 00:28:56.027 "abort": true, 00:28:56.027 "nvme_admin": false, 00:28:56.027 "nvme_io": false 00:28:56.027 }, 00:28:56.027 "memory_domains": [ 00:28:56.027 { 00:28:56.027 "dma_device_id": "system", 00:28:56.027 "dma_device_type": 1 00:28:56.027 }, 00:28:56.027 { 00:28:56.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.027 "dma_device_type": 2 00:28:56.027 } 00:28:56.027 ], 00:28:56.027 "driver_specific": {} 00:28:56.027 } 00:28:56.027 ] 00:28:56.027 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.027 19:22:11 -- common/autotest_common.sh@893 -- # return 0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:28:56.027 19:22:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.027 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 Null_1 00:28:56.027 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.027 19:22:11 -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:28:56.027 19:22:11 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1 00:28:56.027 19:22:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:56.027 19:22:11 -- common/autotest_common.sh@887 -- # local i 00:28:56.027 19:22:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:56.027 19:22:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:56.027 19:22:11 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:28:56.027 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.027 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.027 19:22:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:28:56.027 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.027 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 [ 00:28:56.027 { 00:28:56.027 "name": "Null_1", 00:28:56.027 "aliases": [ 00:28:56.027 "cbc3c59b-3fa5-4dfd-b726-555c6fcbe551" 00:28:56.027 ], 00:28:56.027 "product_name": "Null disk", 00:28:56.027 "block_size": 512, 00:28:56.027 "num_blocks": 262144, 00:28:56.027 "uuid": "cbc3c59b-3fa5-4dfd-b726-555c6fcbe551", 00:28:56.027 "assigned_rate_limits": { 00:28:56.027 "rw_ios_per_sec": 0, 00:28:56.027 "rw_mbytes_per_sec": 0, 00:28:56.027 "r_mbytes_per_sec": 0, 00:28:56.027 "w_mbytes_per_sec": 0 00:28:56.027 }, 00:28:56.027 "claimed": false, 00:28:56.027 "zoned": false, 00:28:56.027 "supported_io_types": { 00:28:56.027 "read": true, 00:28:56.027 "write": true, 00:28:56.027 "unmap": false, 00:28:56.027 "write_zeroes": true, 00:28:56.027 "flush": false, 00:28:56.027 "reset": true, 00:28:56.027 "compare": false, 00:28:56.027 "compare_and_write": false, 00:28:56.027 "abort": true, 00:28:56.027 "nvme_admin": false, 00:28:56.027 "nvme_io": false 00:28:56.027 }, 00:28:56.027 "driver_specific": {} 00:28:56.027 } 00:28:56.027 ] 00:28:56.027 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.027 19:22:11 -- common/autotest_common.sh@893 -- # return 0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@457 -- # qos_function_test 00:28:56.027 19:22:11 -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:56.027 19:22:11 -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:28:56.027 19:22:11 -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:28:56.027 19:22:11 -- bdev/blockdev.sh@412 -- # local io_result=0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:28:56.027 19:22:11 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@377 -- # local iostat_result 00:28:56.027 19:22:11 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:28:56.027 19:22:11 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:28:56.027 19:22:11 -- bdev/blockdev.sh@378 -- # tail -1 00:28:56.285 Running I/O for 60 seconds... 
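The 60-second randread run announced above is throttled and re-measured below entirely over the RPC socket. Stripped of the shell plumbing, the sequence qos_test_suite drives is roughly the following sketch (rpc_cmd in the trace is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the 17000 IOPS figure is the one this particular run derives from its unthrottled measurement), run from the repo root:
# create the two bdevs under test: a 128 MiB malloc bdev and a same-sized null bdev
scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
scripts/rpc.py bdev_null_create Null_1 128 512
# release the queued bdevperf job (bdevperf was started with -z and waits for this)
examples/bdev/bdevperf/bdevperf.py perform_tests &
# measure unthrottled IOPS on Malloc_0, derive a limit, apply it, then re-check
scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1
scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0
scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1   # expected within 15300..18700, i.e. +/-10%
# the bandwidth checks that follow reuse the same pattern with
# --rw_mbytes_per_sec 10 Null_1 and --r_mbytes_per_sec 2 Malloc_0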
00:29:01.629 19:22:17 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 69641.36 278565.44 0.00 0.00 281600.00 0.00 0.00 ' 00:29:01.629 19:22:17 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:29:01.629 19:22:17 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:29:01.629 19:22:17 -- bdev/blockdev.sh@380 -- # iostat_result=69641.36 00:29:01.629 19:22:17 -- bdev/blockdev.sh@385 -- # echo 69641 00:29:01.629 19:22:17 -- bdev/blockdev.sh@416 -- # io_result=69641 00:29:01.629 19:22:17 -- bdev/blockdev.sh@418 -- # iops_limit=17000 00:29:01.629 19:22:17 -- bdev/blockdev.sh@419 -- # '[' 17000 -gt 1000 ']' 00:29:01.629 19:22:17 -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:29:01.629 19:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.629 19:22:17 -- common/autotest_common.sh@10 -- # set +x 00:29:01.629 19:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.629 19:22:17 -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:29:01.629 19:22:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:01.629 19:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:01.629 19:22:17 -- common/autotest_common.sh@10 -- # set +x 00:29:01.629 ************************************ 00:29:01.629 START TEST bdev_qos_iops 00:29:01.629 ************************************ 00:29:01.629 19:22:17 -- common/autotest_common.sh@1111 -- # run_qos_test 17000 IOPS Malloc_0 00:29:01.629 19:22:17 -- bdev/blockdev.sh@389 -- # local qos_limit=17000 00:29:01.629 19:22:17 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:29:01.629 19:22:17 -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:29:01.629 19:22:17 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:29:01.629 19:22:17 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:29:01.629 19:22:17 -- bdev/blockdev.sh@377 -- # local iostat_result 00:29:01.629 19:22:17 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:29:01.629 19:22:17 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:29:01.629 19:22:17 -- bdev/blockdev.sh@378 -- # tail -1 00:29:07.124 19:22:22 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 17034.57 68138.27 0.00 0.00 69360.00 0.00 0.00 ' 00:29:07.124 19:22:22 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:29:07.124 19:22:22 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:29:07.124 19:22:22 -- bdev/blockdev.sh@380 -- # iostat_result=17034.57 00:29:07.124 19:22:22 -- bdev/blockdev.sh@385 -- # echo 17034 00:29:07.124 19:22:22 -- bdev/blockdev.sh@392 -- # qos_result=17034 00:29:07.124 19:22:22 -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:29:07.124 19:22:22 -- bdev/blockdev.sh@396 -- # lower_limit=15300 00:29:07.124 19:22:22 -- bdev/blockdev.sh@397 -- # upper_limit=18700 00:29:07.124 19:22:22 -- bdev/blockdev.sh@400 -- # '[' 17034 -lt 15300 ']' 00:29:07.124 19:22:22 -- bdev/blockdev.sh@400 -- # '[' 17034 -gt 18700 ']' 00:29:07.124 00:29:07.124 real 0m5.214s 00:29:07.124 user 0m0.108s 00:29:07.124 sys 0m0.025s 00:29:07.124 19:22:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.124 ************************************ 00:29:07.124 19:22:22 -- common/autotest_common.sh@10 -- # set +x 00:29:07.124 END TEST bdev_qos_iops 00:29:07.124 ************************************ 00:29:07.124 19:22:22 -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:29:07.124 19:22:22 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:29:07.124 19:22:22 -- 
bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:29:07.124 19:22:22 -- bdev/blockdev.sh@377 -- # local iostat_result 00:29:07.124 19:22:22 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:29:07.124 19:22:22 -- bdev/blockdev.sh@378 -- # grep Null_1 00:29:07.124 19:22:22 -- bdev/blockdev.sh@378 -- # tail -1 00:29:12.429 19:22:27 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 26710.54 106842.14 0.00 0.00 108544.00 0.00 0.00 ' 00:29:12.429 19:22:27 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:29:12.429 19:22:27 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:29:12.429 19:22:27 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:29:12.429 19:22:27 -- bdev/blockdev.sh@382 -- # iostat_result=108544.00 00:29:12.429 19:22:27 -- bdev/blockdev.sh@385 -- # echo 108544 00:29:12.429 19:22:27 -- bdev/blockdev.sh@427 -- # bw_limit=108544 00:29:12.429 19:22:27 -- bdev/blockdev.sh@428 -- # bw_limit=10 00:29:12.429 19:22:27 -- bdev/blockdev.sh@429 -- # '[' 10 -lt 2 ']' 00:29:12.429 19:22:27 -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:29:12.429 19:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.429 19:22:27 -- common/autotest_common.sh@10 -- # set +x 00:29:12.429 19:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.429 19:22:27 -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:29:12.429 19:22:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:12.429 19:22:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:12.429 19:22:27 -- common/autotest_common.sh@10 -- # set +x 00:29:12.429 ************************************ 00:29:12.429 START TEST bdev_qos_bw 00:29:12.429 ************************************ 00:29:12.429 19:22:27 -- common/autotest_common.sh@1111 -- # run_qos_test 10 BANDWIDTH Null_1 00:29:12.429 19:22:27 -- bdev/blockdev.sh@389 -- # local qos_limit=10 00:29:12.429 19:22:27 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:29:12.429 19:22:27 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:29:12.429 19:22:27 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:29:12.429 19:22:27 -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:29:12.429 19:22:27 -- bdev/blockdev.sh@377 -- # local iostat_result 00:29:12.429 19:22:27 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:29:12.429 19:22:27 -- bdev/blockdev.sh@378 -- # grep Null_1 00:29:12.429 19:22:27 -- bdev/blockdev.sh@378 -- # tail -1 00:29:17.702 19:22:32 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2557.01 10228.06 0.00 0.00 10496.00 0.00 0.00 ' 00:29:17.702 19:22:32 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:29:17.702 19:22:32 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:29:17.702 19:22:32 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:29:17.702 19:22:32 -- bdev/blockdev.sh@382 -- # iostat_result=10496.00 00:29:17.702 19:22:32 -- bdev/blockdev.sh@385 -- # echo 10496 00:29:17.702 19:22:32 -- bdev/blockdev.sh@392 -- # qos_result=10496 00:29:17.702 19:22:32 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:29:17.702 19:22:32 -- bdev/blockdev.sh@394 -- # qos_limit=10240 00:29:17.702 19:22:32 -- bdev/blockdev.sh@396 -- # lower_limit=9216 00:29:17.702 19:22:32 -- bdev/blockdev.sh@397 -- # upper_limit=11264 00:29:17.702 19:22:32 -- bdev/blockdev.sh@400 -- # '[' 10496 -lt 9216 ']' 00:29:17.702 19:22:32 -- bdev/blockdev.sh@400 -- # '[' 
10496 -gt 11264 ']' 00:29:17.702 00:29:17.702 real 0m5.230s 00:29:17.702 user 0m0.112s 00:29:17.702 sys 0m0.023s 00:29:17.702 19:22:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:17.702 19:22:32 -- common/autotest_common.sh@10 -- # set +x 00:29:17.702 ************************************ 00:29:17.702 END TEST bdev_qos_bw 00:29:17.702 ************************************ 00:29:17.702 19:22:32 -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:29:17.702 19:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.702 19:22:32 -- common/autotest_common.sh@10 -- # set +x 00:29:17.702 19:22:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.702 19:22:33 -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:29:17.702 19:22:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:17.702 19:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:17.702 19:22:33 -- common/autotest_common.sh@10 -- # set +x 00:29:17.702 ************************************ 00:29:17.702 START TEST bdev_qos_ro_bw 00:29:17.702 ************************************ 00:29:17.702 19:22:33 -- common/autotest_common.sh@1111 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:29:17.702 19:22:33 -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:29:17.702 19:22:33 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:29:17.702 19:22:33 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:29:17.702 19:22:33 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:29:17.702 19:22:33 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:29:17.702 19:22:33 -- bdev/blockdev.sh@377 -- # local iostat_result 00:29:17.702 19:22:33 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:29:17.702 19:22:33 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:29:17.702 19:22:33 -- bdev/blockdev.sh@378 -- # tail -1 00:29:22.965 19:22:38 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.87 2047.46 0.00 0.00 2068.00 0.00 0.00 ' 00:29:22.965 19:22:38 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:29:22.965 19:22:38 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:29:22.965 19:22:38 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:29:22.965 19:22:38 -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:29:22.965 19:22:38 -- bdev/blockdev.sh@385 -- # echo 2068 00:29:22.965 19:22:38 -- bdev/blockdev.sh@392 -- # qos_result=2068 00:29:22.965 19:22:38 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:29:22.965 19:22:38 -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:29:22.965 19:22:38 -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:29:22.965 19:22:38 -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:29:22.965 19:22:38 -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:29:22.966 19:22:38 -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:29:22.966 00:29:22.966 real 0m5.168s 00:29:22.966 user 0m0.107s 00:29:22.966 sys 0m0.031s 00:29:22.966 19:22:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:22.966 ************************************ 00:29:22.966 END TEST bdev_qos_ro_bw 00:29:22.966 19:22:38 -- common/autotest_common.sh@10 -- # set +x 00:29:22.966 ************************************ 00:29:22.966 19:22:38 -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:29:22.966 19:22:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.966 19:22:38 -- common/autotest_common.sh@10 -- # set +x 00:29:23.222 19:22:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.222 19:22:38 -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:29:23.222 19:22:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.222 19:22:38 -- common/autotest_common.sh@10 -- # set +x 00:29:23.222 00:29:23.222 Latency(us) 00:29:23.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.222 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:29:23.222 Malloc_0 : 26.74 23389.78 91.37 0.00 0.00 10840.65 1934.87 503316.48 00:29:23.222 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:29:23.222 Null_1 : 26.99 23819.06 93.04 0.00 0.00 10722.14 655.36 250659.60 00:29:23.222 =================================================================================================================== 00:29:23.222 Total : 47208.84 184.41 0.00 0.00 10780.58 655.36 503316.48 00:29:23.222 0 00:29:23.222 19:22:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.222 19:22:39 -- bdev/blockdev.sh@461 -- # killprocess 119338 00:29:23.222 19:22:39 -- common/autotest_common.sh@936 -- # '[' -z 119338 ']' 00:29:23.222 19:22:39 -- common/autotest_common.sh@940 -- # kill -0 119338 00:29:23.222 19:22:39 -- common/autotest_common.sh@941 -- # uname 00:29:23.222 19:22:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:23.222 19:22:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119338 00:29:23.223 19:22:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:23.223 killing process with pid 119338 00:29:23.223 Received shutdown signal, test time was about 27.028587 seconds 00:29:23.223 00:29:23.223 Latency(us) 00:29:23.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.223 =================================================================================================================== 00:29:23.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.223 19:22:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:23.223 19:22:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119338' 00:29:23.223 19:22:39 -- common/autotest_common.sh@955 -- # kill 119338 00:29:23.223 19:22:39 -- common/autotest_common.sh@960 -- # wait 119338 00:29:25.176 19:22:40 -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:29:25.176 00:29:25.176 real 0m30.122s 00:29:25.176 user 0m30.863s 00:29:25.176 sys 0m0.675s 00:29:25.176 19:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:25.176 19:22:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.176 ************************************ 00:29:25.176 END TEST bdev_qos 00:29:25.176 ************************************ 00:29:25.176 19:22:40 -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:29:25.176 19:22:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:25.176 19:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:25.176 19:22:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.176 ************************************ 00:29:25.176 START TEST bdev_qd_sampling 00:29:25.176 ************************************ 00:29:25.176 19:22:40 -- common/autotest_common.sh@1111 -- # qd_sampling_test_suite '' 00:29:25.176 19:22:40 -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:29:25.176 19:22:40 -- bdev/blockdev.sh@541 -- # QD_PID=119886 00:29:25.176 19:22:40 -- bdev/blockdev.sh@540 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:29:25.176 19:22:40 -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 119886' 00:29:25.176 Process bdev QD sampling period testing pid: 119886 00:29:25.176 19:22:40 -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:29:25.176 19:22:40 -- bdev/blockdev.sh@544 -- # waitforlisten 119886 00:29:25.176 19:22:40 -- common/autotest_common.sh@817 -- # '[' -z 119886 ']' 00:29:25.176 19:22:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.176 19:22:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.176 19:22:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.176 19:22:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:25.176 19:22:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.176 [2024-04-18 19:22:40.971151] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:29:25.176 [2024-04-18 19:22:40.971353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119886 ] 00:29:25.435 [2024-04-18 19:22:41.154573] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:25.694 [2024-04-18 19:22:41.458869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.694 [2024-04-18 19:22:41.458876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.262 19:22:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:26.262 19:22:41 -- common/autotest_common.sh@850 -- # return 0 00:29:26.262 19:22:41 -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:29:26.262 19:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.262 19:22:41 -- common/autotest_common.sh@10 -- # set +x 00:29:26.262 Malloc_QD 00:29:26.262 19:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.262 19:22:42 -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:29:26.262 19:22:42 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD 00:29:26.262 19:22:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:26.262 19:22:42 -- common/autotest_common.sh@887 -- # local i 00:29:26.262 19:22:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:26.262 19:22:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:26.262 19:22:42 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:29:26.262 19:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.262 19:22:42 -- common/autotest_common.sh@10 -- # set +x 00:29:26.262 19:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.262 19:22:42 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:29:26.262 19:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.262 19:22:42 -- common/autotest_common.sh@10 -- # set +x 00:29:26.262 [ 00:29:26.262 { 00:29:26.262 "name": "Malloc_QD", 00:29:26.262 "aliases": [ 00:29:26.262 "acf8658d-add9-4ea0-974c-d50be93db07a" 00:29:26.262 ], 00:29:26.262 "product_name": "Malloc disk", 00:29:26.262 "block_size": 512, 
00:29:26.262 "num_blocks": 262144, 00:29:26.262 "uuid": "acf8658d-add9-4ea0-974c-d50be93db07a", 00:29:26.262 "assigned_rate_limits": { 00:29:26.262 "rw_ios_per_sec": 0, 00:29:26.262 "rw_mbytes_per_sec": 0, 00:29:26.262 "r_mbytes_per_sec": 0, 00:29:26.262 "w_mbytes_per_sec": 0 00:29:26.262 }, 00:29:26.262 "claimed": false, 00:29:26.262 "zoned": false, 00:29:26.262 "supported_io_types": { 00:29:26.262 "read": true, 00:29:26.262 "write": true, 00:29:26.262 "unmap": true, 00:29:26.262 "write_zeroes": true, 00:29:26.262 "flush": true, 00:29:26.262 "reset": true, 00:29:26.262 "compare": false, 00:29:26.262 "compare_and_write": false, 00:29:26.262 "abort": true, 00:29:26.262 "nvme_admin": false, 00:29:26.262 "nvme_io": false 00:29:26.262 }, 00:29:26.262 "memory_domains": [ 00:29:26.262 { 00:29:26.262 "dma_device_id": "system", 00:29:26.262 "dma_device_type": 1 00:29:26.262 }, 00:29:26.262 { 00:29:26.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:26.262 "dma_device_type": 2 00:29:26.262 } 00:29:26.262 ], 00:29:26.262 "driver_specific": {} 00:29:26.262 } 00:29:26.262 ] 00:29:26.262 19:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.262 19:22:42 -- common/autotest_common.sh@893 -- # return 0 00:29:26.262 19:22:42 -- bdev/blockdev.sh@550 -- # sleep 2 00:29:26.262 19:22:42 -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:26.521 Running I/O for 5 seconds... 00:29:28.429 19:22:44 -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:29:28.429 19:22:44 -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:29:28.429 19:22:44 -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:29:28.429 19:22:44 -- bdev/blockdev.sh@521 -- # local iostats 00:29:28.429 19:22:44 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:29:28.429 19:22:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.429 19:22:44 -- common/autotest_common.sh@10 -- # set +x 00:29:28.429 19:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.429 19:22:44 -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:29:28.429 19:22:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.429 19:22:44 -- common/autotest_common.sh@10 -- # set +x 00:29:28.429 19:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.429 19:22:44 -- bdev/blockdev.sh@525 -- # iostats='{ 00:29:28.429 "tick_rate": 2100000000, 00:29:28.429 "ticks": 1817046418340, 00:29:28.429 "bdevs": [ 00:29:28.429 { 00:29:28.429 "name": "Malloc_QD", 00:29:28.429 "bytes_read": 830509568, 00:29:28.429 "num_read_ops": 202755, 00:29:28.429 "bytes_written": 0, 00:29:28.429 "num_write_ops": 0, 00:29:28.429 "bytes_unmapped": 0, 00:29:28.429 "num_unmap_ops": 0, 00:29:28.429 "bytes_copied": 0, 00:29:28.429 "num_copy_ops": 0, 00:29:28.429 "read_latency_ticks": 2051052110524, 00:29:28.429 "max_read_latency_ticks": 20676724, 00:29:28.429 "min_read_latency_ticks": 322772, 00:29:28.429 "write_latency_ticks": 0, 00:29:28.429 "max_write_latency_ticks": 0, 00:29:28.429 "min_write_latency_ticks": 0, 00:29:28.429 "unmap_latency_ticks": 0, 00:29:28.429 "max_unmap_latency_ticks": 0, 00:29:28.429 "min_unmap_latency_ticks": 0, 00:29:28.429 "copy_latency_ticks": 0, 00:29:28.429 "max_copy_latency_ticks": 0, 00:29:28.429 "min_copy_latency_ticks": 0, 00:29:28.429 "io_error": {}, 00:29:28.429 "queue_depth_polling_period": 10, 00:29:28.429 "queue_depth": 512, 00:29:28.429 "io_time": 20, 00:29:28.429 "weighted_io_time": 10240 
00:29:28.429 } 00:29:28.429 ] 00:29:28.429 }' 00:29:28.430 19:22:44 -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:29:28.430 19:22:44 -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:29:28.430 19:22:44 -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:29:28.430 19:22:44 -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:29:28.430 19:22:44 -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:29:28.430 19:22:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.430 19:22:44 -- common/autotest_common.sh@10 -- # set +x 00:29:28.430 00:29:28.430 Latency(us) 00:29:28.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.430 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:29:28.430 Malloc_QD : 1.98 51973.25 203.02 0.00 0.00 4913.31 1271.71 9861.61 00:29:28.430 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:29:28.430 Malloc_QD : 1.98 54114.16 211.38 0.00 0.00 4719.62 858.21 5118.05 00:29:28.430 =================================================================================================================== 00:29:28.430 Total : 106087.41 414.40 0.00 0.00 4814.45 858.21 9861.61 00:29:28.688 0 00:29:28.688 19:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.688 19:22:44 -- bdev/blockdev.sh@554 -- # killprocess 119886 00:29:28.688 19:22:44 -- common/autotest_common.sh@936 -- # '[' -z 119886 ']' 00:29:28.688 19:22:44 -- common/autotest_common.sh@940 -- # kill -0 119886 00:29:28.688 19:22:44 -- common/autotest_common.sh@941 -- # uname 00:29:28.688 19:22:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:28.688 19:22:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119886 00:29:28.688 19:22:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:28.688 killing process with pid 119886 00:29:28.688 19:22:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:28.688 19:22:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119886' 00:29:28.688 Received shutdown signal, test time was about 2.152362 seconds 00:29:28.688 00:29:28.688 Latency(us) 00:29:28.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.688 =================================================================================================================== 00:29:28.688 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.688 19:22:44 -- common/autotest_common.sh@955 -- # kill 119886 00:29:28.688 19:22:44 -- common/autotest_common.sh@960 -- # wait 119886 00:29:30.588 19:22:46 -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:29:30.588 00:29:30.588 real 0m5.306s 00:29:30.588 user 0m9.679s 00:29:30.588 sys 0m0.430s 00:29:30.588 ************************************ 00:29:30.588 END TEST bdev_qd_sampling 00:29:30.588 ************************************ 00:29:30.588 19:22:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:30.588 19:22:46 -- common/autotest_common.sh@10 -- # set +x 00:29:30.588 19:22:46 -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:29:30.588 19:22:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:30.588 19:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.588 19:22:46 -- common/autotest_common.sh@10 -- # set +x 00:29:30.588 ************************************ 00:29:30.588 START TEST bdev_error 00:29:30.588 ************************************ 00:29:30.588 19:22:46 
-- common/autotest_common.sh@1111 -- # error_test_suite '' 00:29:30.588 19:22:46 -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:29:30.588 19:22:46 -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:29:30.588 19:22:46 -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:29:30.588 19:22:46 -- bdev/blockdev.sh@472 -- # ERR_PID=119990 00:29:30.588 19:22:46 -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 119990' 00:29:30.588 Process error testing pid: 119990 00:29:30.588 19:22:46 -- bdev/blockdev.sh@474 -- # waitforlisten 119990 00:29:30.588 19:22:46 -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:29:30.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.588 19:22:46 -- common/autotest_common.sh@817 -- # '[' -z 119990 ']' 00:29:30.588 19:22:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.588 19:22:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:30.588 19:22:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.588 19:22:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:30.588 19:22:46 -- common/autotest_common.sh@10 -- # set +x 00:29:30.588 [2024-04-18 19:22:46.370277] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:29:30.588 [2024-04-18 19:22:46.370635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119990 ] 00:29:30.846 [2024-04-18 19:22:46.557069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.103 [2024-04-18 19:22:46.848977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.668 19:22:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:31.668 19:22:47 -- common/autotest_common.sh@850 -- # return 0 00:29:31.668 19:22:47 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:29:31.668 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.668 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.668 Dev_1 00:29:31.668 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.668 19:22:47 -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:29:31.668 19:22:47 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:29:31.668 19:22:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:31.668 19:22:47 -- common/autotest_common.sh@887 -- # local i 00:29:31.668 19:22:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:31.668 19:22:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:31.668 19:22:47 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:29:31.668 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.668 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.669 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.669 19:22:47 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:29:31.669 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.669 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.669 [ 00:29:31.669 { 00:29:31.669 "name": "Dev_1", 00:29:31.669 "aliases": [ 00:29:31.669 "3c6cd08b-7e2b-460d-b049-cf86b69c8f9a" 
00:29:31.669 ], 00:29:31.669 "product_name": "Malloc disk", 00:29:31.669 "block_size": 512, 00:29:31.669 "num_blocks": 262144, 00:29:31.669 "uuid": "3c6cd08b-7e2b-460d-b049-cf86b69c8f9a", 00:29:31.669 "assigned_rate_limits": { 00:29:31.669 "rw_ios_per_sec": 0, 00:29:31.669 "rw_mbytes_per_sec": 0, 00:29:31.669 "r_mbytes_per_sec": 0, 00:29:31.669 "w_mbytes_per_sec": 0 00:29:31.669 }, 00:29:31.669 "claimed": false, 00:29:31.669 "zoned": false, 00:29:31.669 "supported_io_types": { 00:29:31.669 "read": true, 00:29:31.669 "write": true, 00:29:31.669 "unmap": true, 00:29:31.669 "write_zeroes": true, 00:29:31.669 "flush": true, 00:29:31.669 "reset": true, 00:29:31.669 "compare": false, 00:29:31.669 "compare_and_write": false, 00:29:31.669 "abort": true, 00:29:31.669 "nvme_admin": false, 00:29:31.669 "nvme_io": false 00:29:31.669 }, 00:29:31.669 "memory_domains": [ 00:29:31.669 { 00:29:31.669 "dma_device_id": "system", 00:29:31.669 "dma_device_type": 1 00:29:31.669 }, 00:29:31.669 { 00:29:31.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:31.669 "dma_device_type": 2 00:29:31.669 } 00:29:31.669 ], 00:29:31.669 "driver_specific": {} 00:29:31.669 } 00:29:31.669 ] 00:29:31.669 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.669 19:22:47 -- common/autotest_common.sh@893 -- # return 0 00:29:31.669 19:22:47 -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:29:31.669 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.669 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.669 true 00:29:31.669 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.669 19:22:47 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:29:31.669 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.669 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 Dev_2 00:29:31.926 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.926 19:22:47 -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:29:31.926 19:22:47 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:29:31.926 19:22:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:31.926 19:22:47 -- common/autotest_common.sh@887 -- # local i 00:29:31.926 19:22:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:31.926 19:22:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:31.926 19:22:47 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:29:31.926 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.926 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.926 19:22:47 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:29:31.926 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.926 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.926 [ 00:29:31.926 { 00:29:31.926 "name": "Dev_2", 00:29:31.926 "aliases": [ 00:29:31.926 "a58e5688-504a-4c91-b940-71bfe62c46b8" 00:29:31.926 ], 00:29:31.926 "product_name": "Malloc disk", 00:29:31.926 "block_size": 512, 00:29:31.926 "num_blocks": 262144, 00:29:31.926 "uuid": "a58e5688-504a-4c91-b940-71bfe62c46b8", 00:29:31.926 "assigned_rate_limits": { 00:29:31.926 "rw_ios_per_sec": 0, 00:29:31.926 "rw_mbytes_per_sec": 0, 00:29:31.926 "r_mbytes_per_sec": 0, 00:29:31.926 "w_mbytes_per_sec": 0 00:29:31.926 }, 00:29:31.926 "claimed": false, 00:29:31.926 "zoned": false, 00:29:31.926 
"supported_io_types": { 00:29:31.926 "read": true, 00:29:31.926 "write": true, 00:29:31.926 "unmap": true, 00:29:31.926 "write_zeroes": true, 00:29:31.927 "flush": true, 00:29:31.927 "reset": true, 00:29:31.927 "compare": false, 00:29:31.927 "compare_and_write": false, 00:29:31.927 "abort": true, 00:29:31.927 "nvme_admin": false, 00:29:31.927 "nvme_io": false 00:29:31.927 }, 00:29:31.927 "memory_domains": [ 00:29:31.927 { 00:29:31.927 "dma_device_id": "system", 00:29:31.927 "dma_device_type": 1 00:29:31.927 }, 00:29:31.927 { 00:29:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:31.927 "dma_device_type": 2 00:29:31.927 } 00:29:31.927 ], 00:29:31.927 "driver_specific": {} 00:29:31.927 } 00:29:31.927 ] 00:29:31.927 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.927 19:22:47 -- common/autotest_common.sh@893 -- # return 0 00:29:31.927 19:22:47 -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:29:31.927 19:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.927 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.927 19:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.927 19:22:47 -- bdev/blockdev.sh@484 -- # sleep 1 00:29:31.927 19:22:47 -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:29:32.184 Running I/O for 5 seconds... 00:29:33.119 19:22:48 -- bdev/blockdev.sh@487 -- # kill -0 119990 00:29:33.119 19:22:48 -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 119990' 00:29:33.119 Process is existed as continue on error is set. Pid: 119990 00:29:33.119 19:22:48 -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:29:33.119 19:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.119 19:22:48 -- common/autotest_common.sh@10 -- # set +x 00:29:33.119 19:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.119 19:22:48 -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:29:33.119 19:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.119 19:22:48 -- common/autotest_common.sh@10 -- # set +x 00:29:33.119 Timeout while waiting for response: 00:29:33.119 00:29:33.119 00:29:33.377 19:22:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.377 19:22:49 -- bdev/blockdev.sh@497 -- # sleep 5 00:29:37.581 00:29:37.581 Latency(us) 00:29:37.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.581 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:29:37.581 EE_Dev_1 : 0.89 38263.91 149.47 5.61 0.00 415.08 123.86 2293.76 00:29:37.581 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:29:37.581 Dev_2 : 5.00 85138.43 332.57 0.00 0.00 185.25 57.78 393465.66 00:29:37.581 =================================================================================================================== 00:29:37.581 Total : 123402.34 482.04 5.61 0.00 202.29 57.78 393465.66 00:29:38.517 19:22:54 -- bdev/blockdev.sh@499 -- # killprocess 119990 00:29:38.517 19:22:54 -- common/autotest_common.sh@936 -- # '[' -z 119990 ']' 00:29:38.517 19:22:54 -- common/autotest_common.sh@940 -- # kill -0 119990 00:29:38.517 19:22:54 -- common/autotest_common.sh@941 -- # uname 00:29:38.517 19:22:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:38.517 19:22:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119990 00:29:38.517 19:22:54 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:38.517 killing process with pid 119990 00:29:38.517 19:22:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:38.517 19:22:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119990' 00:29:38.517 Received shutdown signal, test time was about 5.000000 seconds 00:29:38.517 00:29:38.517 Latency(us) 00:29:38.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.517 =================================================================================================================== 00:29:38.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:38.517 19:22:54 -- common/autotest_common.sh@955 -- # kill 119990 00:29:38.517 19:22:54 -- common/autotest_common.sh@960 -- # wait 119990 00:29:40.420 19:22:56 -- bdev/blockdev.sh@503 -- # ERR_PID=120138 00:29:40.420 19:22:56 -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 120138' 00:29:40.420 Process error testing pid: 120138 00:29:40.420 19:22:56 -- bdev/blockdev.sh@505 -- # waitforlisten 120138 00:29:40.420 19:22:56 -- common/autotest_common.sh@817 -- # '[' -z 120138 ']' 00:29:40.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.420 19:22:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.420 19:22:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:40.420 19:22:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.420 19:22:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:40.420 19:22:56 -- common/autotest_common.sh@10 -- # set +x 00:29:40.420 19:22:56 -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:29:40.420 [2024-04-18 19:22:56.280238] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:29:40.420 [2024-04-18 19:22:56.280446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120138 ] 00:29:40.678 [2024-04-18 19:22:56.458756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.937 [2024-04-18 19:22:56.783169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.502 19:22:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:41.502 19:22:57 -- common/autotest_common.sh@850 -- # return 0 00:29:41.502 19:22:57 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:29:41.502 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.503 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.761 Dev_1 00:29:41.761 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.761 19:22:57 -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:29:41.761 19:22:57 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:29:41.761 19:22:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:41.761 19:22:57 -- common/autotest_common.sh@887 -- # local i 00:29:41.761 19:22:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:41.761 19:22:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:41.761 19:22:57 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:29:41.761 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.761 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.761 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.761 19:22:57 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:29:41.761 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.761 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.761 [ 00:29:41.761 { 00:29:41.761 "name": "Dev_1", 00:29:41.761 "aliases": [ 00:29:41.761 "3461fc9f-831f-4c08-9900-85c3766fd22a" 00:29:41.761 ], 00:29:41.761 "product_name": "Malloc disk", 00:29:41.761 "block_size": 512, 00:29:41.761 "num_blocks": 262144, 00:29:41.761 "uuid": "3461fc9f-831f-4c08-9900-85c3766fd22a", 00:29:41.761 "assigned_rate_limits": { 00:29:41.761 "rw_ios_per_sec": 0, 00:29:41.761 "rw_mbytes_per_sec": 0, 00:29:41.761 "r_mbytes_per_sec": 0, 00:29:41.761 "w_mbytes_per_sec": 0 00:29:41.761 }, 00:29:41.761 "claimed": false, 00:29:41.761 "zoned": false, 00:29:41.761 "supported_io_types": { 00:29:41.761 "read": true, 00:29:41.761 "write": true, 00:29:41.761 "unmap": true, 00:29:41.761 "write_zeroes": true, 00:29:41.761 "flush": true, 00:29:41.761 "reset": true, 00:29:41.761 "compare": false, 00:29:41.761 "compare_and_write": false, 00:29:41.761 "abort": true, 00:29:41.761 "nvme_admin": false, 00:29:41.761 "nvme_io": false 00:29:41.761 }, 00:29:41.761 "memory_domains": [ 00:29:41.761 { 00:29:41.761 "dma_device_id": "system", 00:29:41.761 "dma_device_type": 1 00:29:41.761 }, 00:29:41.761 { 00:29:41.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:41.761 "dma_device_type": 2 00:29:41.761 } 00:29:41.761 ], 00:29:41.761 "driver_specific": {} 00:29:41.761 } 00:29:41.761 ] 00:29:41.761 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.761 19:22:57 -- common/autotest_common.sh@893 -- # return 0 00:29:41.761 19:22:57 -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:29:41.761 19:22:57 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:29:41.761 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.761 true 00:29:41.761 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.761 19:22:57 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:29:41.761 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.761 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.761 Dev_2 00:29:41.761 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.761 19:22:57 -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:29:41.761 19:22:57 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:29:41.761 19:22:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:41.761 19:22:57 -- common/autotest_common.sh@887 -- # local i 00:29:41.761 19:22:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:41.761 19:22:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:41.761 19:22:57 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:29:41.761 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.761 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.761 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.761 19:22:57 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:29:41.761 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.761 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:42.020 [ 00:29:42.020 { 00:29:42.020 "name": "Dev_2", 00:29:42.020 "aliases": [ 00:29:42.020 "d9407837-6d3c-4f2d-94ea-4aa881f10f79" 00:29:42.020 ], 00:29:42.020 "product_name": "Malloc disk", 00:29:42.020 "block_size": 512, 00:29:42.020 "num_blocks": 262144, 00:29:42.020 "uuid": "d9407837-6d3c-4f2d-94ea-4aa881f10f79", 00:29:42.020 "assigned_rate_limits": { 00:29:42.020 "rw_ios_per_sec": 0, 00:29:42.020 "rw_mbytes_per_sec": 0, 00:29:42.020 "r_mbytes_per_sec": 0, 00:29:42.020 "w_mbytes_per_sec": 0 00:29:42.020 }, 00:29:42.020 "claimed": false, 00:29:42.020 "zoned": false, 00:29:42.020 "supported_io_types": { 00:29:42.020 "read": true, 00:29:42.020 "write": true, 00:29:42.020 "unmap": true, 00:29:42.020 "write_zeroes": true, 00:29:42.020 "flush": true, 00:29:42.020 "reset": true, 00:29:42.020 "compare": false, 00:29:42.020 "compare_and_write": false, 00:29:42.020 "abort": true, 00:29:42.020 "nvme_admin": false, 00:29:42.020 "nvme_io": false 00:29:42.020 }, 00:29:42.020 "memory_domains": [ 00:29:42.020 { 00:29:42.020 "dma_device_id": "system", 00:29:42.020 "dma_device_type": 1 00:29:42.020 }, 00:29:42.020 { 00:29:42.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.020 "dma_device_type": 2 00:29:42.020 } 00:29:42.020 ], 00:29:42.020 "driver_specific": {} 00:29:42.020 } 00:29:42.020 ] 00:29:42.020 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.020 19:22:57 -- common/autotest_common.sh@893 -- # return 0 00:29:42.020 19:22:57 -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:29:42.020 19:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.020 19:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:42.020 19:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.020 19:22:57 -- bdev/blockdev.sh@515 -- # NOT wait 120138 00:29:42.020 19:22:57 -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:29:42.020 19:22:57 -- common/autotest_common.sh@638 -- # local es=0 
00:29:42.020 19:22:57 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 120138 00:29:42.020 19:22:57 -- common/autotest_common.sh@626 -- # local arg=wait 00:29:42.020 19:22:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:42.020 19:22:57 -- common/autotest_common.sh@630 -- # type -t wait 00:29:42.020 19:22:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:42.020 19:22:57 -- common/autotest_common.sh@641 -- # wait 120138 00:29:42.020 Running I/O for 5 seconds... 00:29:42.020 task offset: 156760 on job bdev=EE_Dev_1 fails 00:29:42.020 00:29:42.020 Latency(us) 00:29:42.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.020 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:29:42.020 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:29:42.020 EE_Dev_1 : 0.00 30985.92 121.04 7042.25 0.00 341.97 126.78 620.25 00:29:42.020 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:29:42.020 Dev_2 : 0.00 20163.83 78.76 0.00 0.00 572.72 122.88 1061.06 00:29:42.020 =================================================================================================================== 00:29:42.020 Total : 51149.75 199.80 7042.25 0.00 467.12 122.88 1061.06 00:29:42.020 [2024-04-18 19:22:57.833688] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:42.020 request: 00:29:42.020 { 00:29:42.020 "method": "perform_tests", 00:29:42.020 "req_id": 1 00:29:42.020 } 00:29:42.020 Got JSON-RPC error response 00:29:42.020 response: 00:29:42.020 { 00:29:42.020 "code": -32603, 00:29:42.020 "message": "bdevperf failed with error Operation not permitted" 00:29:42.020 } 00:29:44.554 19:23:00 -- common/autotest_common.sh@641 -- # es=255 00:29:44.554 19:23:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:44.554 19:23:00 -- common/autotest_common.sh@650 -- # es=127 00:29:44.554 19:23:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:44.554 19:23:00 -- common/autotest_common.sh@658 -- # es=1 00:29:44.554 19:23:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:44.554 00:29:44.554 real 0m13.874s 00:29:44.554 user 0m14.146s 00:29:44.554 sys 0m0.952s 00:29:44.554 19:23:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:44.554 19:23:00 -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 ************************************ 00:29:44.554 END TEST bdev_error 00:29:44.554 ************************************ 00:29:44.554 19:23:00 -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:29:44.554 19:23:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:44.554 19:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:44.554 19:23:00 -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 ************************************ 00:29:44.554 START TEST bdev_stat 00:29:44.554 ************************************ 00:29:44.554 19:23:00 -- common/autotest_common.sh@1111 -- # stat_test_suite '' 00:29:44.554 19:23:00 -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:29:44.554 19:23:00 -- bdev/blockdev.sh@596 -- # STAT_PID=120234 00:29:44.554 Process Bdev IO statistics testing pid: 120234 00:29:44.554 19:23:00 -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 120234' 00:29:44.554 19:23:00 -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:29:44.554 19:23:00 -- bdev/blockdev.sh@599 -- # waitforlisten 120234 00:29:44.554 19:23:00 
-- common/autotest_common.sh@817 -- # '[' -z 120234 ']' 00:29:44.554 19:23:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.554 19:23:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:44.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.554 19:23:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.554 19:23:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:44.554 19:23:00 -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 19:23:00 -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:29:44.554 [2024-04-18 19:23:00.345359] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:29:44.554 [2024-04-18 19:23:00.345750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120234 ] 00:29:44.811 [2024-04-18 19:23:00.536953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:45.069 [2024-04-18 19:23:00.772128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.069 [2024-04-18 19:23:00.772130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.669 19:23:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:45.669 19:23:01 -- common/autotest_common.sh@850 -- # return 0 00:29:45.669 19:23:01 -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:29:45.669 19:23:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.669 19:23:01 -- common/autotest_common.sh@10 -- # set +x 00:29:45.669 Malloc_STAT 00:29:45.669 19:23:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.669 19:23:01 -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:29:45.669 19:23:01 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:29:45.669 19:23:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:45.669 19:23:01 -- common/autotest_common.sh@887 -- # local i 00:29:45.669 19:23:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:45.669 19:23:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:45.669 19:23:01 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:29:45.669 19:23:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.669 19:23:01 -- common/autotest_common.sh@10 -- # set +x 00:29:45.669 19:23:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.669 19:23:01 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:29:45.669 19:23:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.669 19:23:01 -- common/autotest_common.sh@10 -- # set +x 00:29:45.669 [ 00:29:45.669 { 00:29:45.669 "name": "Malloc_STAT", 00:29:45.669 "aliases": [ 00:29:45.669 "5957571f-87ec-47a1-92e1-a10f2ad789fe" 00:29:45.669 ], 00:29:45.669 "product_name": "Malloc disk", 00:29:45.669 "block_size": 512, 00:29:45.669 "num_blocks": 262144, 00:29:45.669 "uuid": "5957571f-87ec-47a1-92e1-a10f2ad789fe", 00:29:45.669 "assigned_rate_limits": { 00:29:45.669 "rw_ios_per_sec": 0, 00:29:45.669 "rw_mbytes_per_sec": 0, 00:29:45.669 "r_mbytes_per_sec": 0, 00:29:45.669 "w_mbytes_per_sec": 0 00:29:45.669 }, 00:29:45.669 "claimed": 
false, 00:29:45.669 "zoned": false, 00:29:45.669 "supported_io_types": { 00:29:45.669 "read": true, 00:29:45.669 "write": true, 00:29:45.669 "unmap": true, 00:29:45.669 "write_zeroes": true, 00:29:45.669 "flush": true, 00:29:45.669 "reset": true, 00:29:45.669 "compare": false, 00:29:45.669 "compare_and_write": false, 00:29:45.669 "abort": true, 00:29:45.669 "nvme_admin": false, 00:29:45.669 "nvme_io": false 00:29:45.669 }, 00:29:45.669 "memory_domains": [ 00:29:45.669 { 00:29:45.669 "dma_device_id": "system", 00:29:45.669 "dma_device_type": 1 00:29:45.669 }, 00:29:45.669 { 00:29:45.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:45.669 "dma_device_type": 2 00:29:45.669 } 00:29:45.669 ], 00:29:45.669 "driver_specific": {} 00:29:45.669 } 00:29:45.669 ] 00:29:45.669 19:23:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.669 19:23:01 -- common/autotest_common.sh@893 -- # return 0 00:29:45.669 19:23:01 -- bdev/blockdev.sh@605 -- # sleep 2 00:29:45.669 19:23:01 -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:45.927 Running I/O for 10 seconds... 00:29:47.827 19:23:03 -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:29:47.827 19:23:03 -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:29:47.827 19:23:03 -- bdev/blockdev.sh@560 -- # local iostats 00:29:47.827 19:23:03 -- bdev/blockdev.sh@561 -- # local io_count1 00:29:47.827 19:23:03 -- bdev/blockdev.sh@562 -- # local io_count2 00:29:47.827 19:23:03 -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:29:47.827 19:23:03 -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:29:47.827 19:23:03 -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:29:47.827 19:23:03 -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:29:47.827 19:23:03 -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:29:47.827 19:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.827 19:23:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.827 19:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.827 19:23:03 -- bdev/blockdev.sh@568 -- # iostats='{ 00:29:47.827 "tick_rate": 2100000000, 00:29:47.827 "ticks": 1857737108342, 00:29:47.827 "bdevs": [ 00:29:47.827 { 00:29:47.827 "name": "Malloc_STAT", 00:29:47.827 "bytes_read": 812683776, 00:29:47.827 "num_read_ops": 198403, 00:29:47.827 "bytes_written": 0, 00:29:47.827 "num_write_ops": 0, 00:29:47.827 "bytes_unmapped": 0, 00:29:47.827 "num_unmap_ops": 0, 00:29:47.827 "bytes_copied": 0, 00:29:47.827 "num_copy_ops": 0, 00:29:47.827 "read_latency_ticks": 2029707147782, 00:29:47.827 "max_read_latency_ticks": 14818032, 00:29:47.827 "min_read_latency_ticks": 318866, 00:29:47.827 "write_latency_ticks": 0, 00:29:47.827 "max_write_latency_ticks": 0, 00:29:47.827 "min_write_latency_ticks": 0, 00:29:47.827 "unmap_latency_ticks": 0, 00:29:47.827 "max_unmap_latency_ticks": 0, 00:29:47.827 "min_unmap_latency_ticks": 0, 00:29:47.827 "copy_latency_ticks": 0, 00:29:47.827 "max_copy_latency_ticks": 0, 00:29:47.827 "min_copy_latency_ticks": 0, 00:29:47.827 "io_error": {} 00:29:47.827 } 00:29:47.827 ] 00:29:47.827 }' 00:29:47.827 19:23:03 -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:29:47.827 19:23:03 -- bdev/blockdev.sh@569 -- # io_count1=198403 00:29:47.827 19:23:03 -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:29:47.827 19:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.827 19:23:03 -- 
common/autotest_common.sh@10 -- # set +x 00:29:47.827 19:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.827 19:23:03 -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:29:47.827 "tick_rate": 2100000000, 00:29:47.827 "ticks": 1857862478012, 00:29:47.827 "name": "Malloc_STAT", 00:29:47.827 "channels": [ 00:29:47.827 { 00:29:47.827 "thread_id": 2, 00:29:47.827 "bytes_read": 420478976, 00:29:47.827 "num_read_ops": 102656, 00:29:47.827 "bytes_written": 0, 00:29:47.827 "num_write_ops": 0, 00:29:47.827 "bytes_unmapped": 0, 00:29:47.827 "num_unmap_ops": 0, 00:29:47.827 "bytes_copied": 0, 00:29:47.827 "num_copy_ops": 0, 00:29:47.827 "read_latency_ticks": 1046245587882, 00:29:47.827 "max_read_latency_ticks": 14818032, 00:29:47.827 "min_read_latency_ticks": 7533766, 00:29:47.827 "write_latency_ticks": 0, 00:29:47.827 "max_write_latency_ticks": 0, 00:29:47.827 "min_write_latency_ticks": 0, 00:29:47.827 "unmap_latency_ticks": 0, 00:29:47.827 "max_unmap_latency_ticks": 0, 00:29:47.827 "min_unmap_latency_ticks": 0, 00:29:47.827 "copy_latency_ticks": 0, 00:29:47.827 "max_copy_latency_ticks": 0, 00:29:47.827 "min_copy_latency_ticks": 0 00:29:47.827 }, 00:29:47.827 { 00:29:47.827 "thread_id": 3, 00:29:47.827 "bytes_read": 418381824, 00:29:47.827 "num_read_ops": 102144, 00:29:47.827 "bytes_written": 0, 00:29:47.827 "num_write_ops": 0, 00:29:47.827 "bytes_unmapped": 0, 00:29:47.827 "num_unmap_ops": 0, 00:29:47.827 "bytes_copied": 0, 00:29:47.827 "num_copy_ops": 0, 00:29:47.827 "read_latency_ticks": 1047951094270, 00:29:47.827 "max_read_latency_ticks": 13010982, 00:29:47.827 "min_read_latency_ticks": 7507046, 00:29:47.827 "write_latency_ticks": 0, 00:29:47.827 "max_write_latency_ticks": 0, 00:29:47.827 "min_write_latency_ticks": 0, 00:29:47.827 "unmap_latency_ticks": 0, 00:29:47.827 "max_unmap_latency_ticks": 0, 00:29:47.827 "min_unmap_latency_ticks": 0, 00:29:47.827 "copy_latency_ticks": 0, 00:29:47.827 "max_copy_latency_ticks": 0, 00:29:47.827 "min_copy_latency_ticks": 0 00:29:47.827 } 00:29:47.827 ] 00:29:47.827 }' 00:29:47.827 19:23:03 -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:29:47.827 19:23:03 -- bdev/blockdev.sh@572 -- # io_count_per_channel1=102656 00:29:47.827 19:23:03 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=102656 00:29:47.827 19:23:03 -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:29:47.827 19:23:03 -- bdev/blockdev.sh@574 -- # io_count_per_channel2=102144 00:29:47.828 19:23:03 -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=204800 00:29:47.828 19:23:03 -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:29:47.828 19:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.828 19:23:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.828 19:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.828 19:23:03 -- bdev/blockdev.sh@577 -- # iostats='{ 00:29:47.828 "tick_rate": 2100000000, 00:29:47.828 "ticks": 1858118703178, 00:29:47.828 "bdevs": [ 00:29:47.828 { 00:29:47.828 "name": "Malloc_STAT", 00:29:47.828 "bytes_read": 891326976, 00:29:47.828 "num_read_ops": 217603, 00:29:47.828 "bytes_written": 0, 00:29:47.828 "num_write_ops": 0, 00:29:47.828 "bytes_unmapped": 0, 00:29:47.828 "num_unmap_ops": 0, 00:29:47.828 "bytes_copied": 0, 00:29:47.828 "num_copy_ops": 0, 00:29:47.828 "read_latency_ticks": 2225832272696, 00:29:47.828 "max_read_latency_ticks": 14818032, 00:29:47.828 "min_read_latency_ticks": 318866, 00:29:47.828 "write_latency_ticks": 0, 00:29:47.828 
"max_write_latency_ticks": 0, 00:29:47.828 "min_write_latency_ticks": 0, 00:29:47.828 "unmap_latency_ticks": 0, 00:29:47.828 "max_unmap_latency_ticks": 0, 00:29:47.828 "min_unmap_latency_ticks": 0, 00:29:47.828 "copy_latency_ticks": 0, 00:29:47.828 "max_copy_latency_ticks": 0, 00:29:47.828 "min_copy_latency_ticks": 0, 00:29:47.828 "io_error": {} 00:29:47.828 } 00:29:47.828 ] 00:29:47.828 }' 00:29:47.828 19:23:03 -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:29:48.085 19:23:03 -- bdev/blockdev.sh@578 -- # io_count2=217603 00:29:48.085 19:23:03 -- bdev/blockdev.sh@583 -- # '[' 204800 -lt 198403 ']' 00:29:48.085 19:23:03 -- bdev/blockdev.sh@583 -- # '[' 204800 -gt 217603 ']' 00:29:48.085 19:23:03 -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:29:48.085 19:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.085 19:23:03 -- common/autotest_common.sh@10 -- # set +x 00:29:48.085 00:29:48.085 Latency(us) 00:29:48.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.085 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:29:48.085 Malloc_STAT : 2.15 52661.71 205.71 0.00 0.00 4850.03 1053.26 7084.13 00:29:48.086 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:29:48.086 Malloc_STAT : 2.15 52252.02 204.11 0.00 0.00 4888.12 1037.65 6210.32 00:29:48.086 =================================================================================================================== 00:29:48.086 Total : 104913.72 409.82 0.00 0.00 4869.01 1037.65 7084.13 00:29:48.086 0 00:29:48.086 19:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.086 19:23:03 -- bdev/blockdev.sh@609 -- # killprocess 120234 00:29:48.086 19:23:03 -- common/autotest_common.sh@936 -- # '[' -z 120234 ']' 00:29:48.086 19:23:03 -- common/autotest_common.sh@940 -- # kill -0 120234 00:29:48.086 19:23:03 -- common/autotest_common.sh@941 -- # uname 00:29:48.086 19:23:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:48.086 19:23:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120234 00:29:48.086 19:23:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:48.086 19:23:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:48.086 19:23:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120234' 00:29:48.086 killing process with pid 120234 00:29:48.086 19:23:03 -- common/autotest_common.sh@955 -- # kill 120234 00:29:48.086 Received shutdown signal, test time was about 2.327315 seconds 00:29:48.086 00:29:48.086 Latency(us) 00:29:48.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.086 =================================================================================================================== 00:29:48.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.086 19:23:03 -- common/autotest_common.sh@960 -- # wait 120234 00:29:49.995 19:23:05 -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:29:49.995 00:29:49.995 real 0m5.452s 00:29:49.995 user 0m10.242s 00:29:49.995 sys 0m0.420s 00:29:49.995 19:23:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:49.995 ************************************ 00:29:49.995 END TEST bdev_stat 00:29:49.995 ************************************ 00:29:49.995 19:23:05 -- common/autotest_common.sh@10 -- # set +x 00:29:49.995 19:23:05 -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:29:49.995 19:23:05 -- bdev/blockdev.sh@798 
-- # [[ bdev == crypto_sw ]] 00:29:49.995 19:23:05 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:29:49.995 19:23:05 -- bdev/blockdev.sh@811 -- # cleanup 00:29:49.995 19:23:05 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:49.995 19:23:05 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:49.995 19:23:05 -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:29:49.995 19:23:05 -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:29:49.995 19:23:05 -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:29:49.995 19:23:05 -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:29:49.995 00:29:49.995 real 2m43.656s 00:29:49.995 user 6m18.433s 00:29:49.995 sys 0m25.011s 00:29:49.995 19:23:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:49.995 19:23:05 -- common/autotest_common.sh@10 -- # set +x 00:29:49.995 ************************************ 00:29:49.995 END TEST blockdev_general 00:29:49.995 ************************************ 00:29:49.995 19:23:05 -- spdk/autotest.sh@186 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:29:49.995 19:23:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:49.995 19:23:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:49.995 19:23:05 -- common/autotest_common.sh@10 -- # set +x 00:29:49.995 ************************************ 00:29:49.995 START TEST bdev_raid 00:29:49.995 ************************************ 00:29:49.995 19:23:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:29:50.254 * Looking for test storage... 00:29:50.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:50.254 19:23:05 -- bdev/nbd_common.sh@6 -- # set -e 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@716 -- # uname -s 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:29:50.254 19:23:05 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:29:50.254 19:23:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:50.254 19:23:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:50.254 19:23:05 -- common/autotest_common.sh@10 -- # set +x 00:29:50.254 ************************************ 00:29:50.254 START TEST raid_function_test_raid0 00:29:50.254 ************************************ 00:29:50.254 19:23:06 -- common/autotest_common.sh@1111 -- # raid_function_test raid0 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@86 -- # raid_pid=120409 00:29:50.254 Process raid pid: 120409 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 
120409' 00:29:50.254 19:23:06 -- bdev/bdev_raid.sh@88 -- # waitforlisten 120409 /var/tmp/spdk-raid.sock 00:29:50.254 19:23:06 -- common/autotest_common.sh@817 -- # '[' -z 120409 ']' 00:29:50.254 19:23:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:50.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:50.254 19:23:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:50.254 19:23:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:50.254 19:23:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:50.254 19:23:06 -- common/autotest_common.sh@10 -- # set +x 00:29:50.254 [2024-04-18 19:23:06.122827] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:29:50.254 [2024-04-18 19:23:06.123020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.512 [2024-04-18 19:23:06.309909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.771 [2024-04-18 19:23:06.529321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.029 [2024-04-18 19:23:06.729824] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:51.288 19:23:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:51.288 19:23:07 -- common/autotest_common.sh@850 -- # return 0 00:29:51.288 19:23:07 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:29:51.288 19:23:07 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:29:51.288 19:23:07 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:29:51.288 19:23:07 -- bdev/bdev_raid.sh@70 -- # cat 00:29:51.288 19:23:07 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:29:51.546 [2024-04-18 19:23:07.355702] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:29:51.546 [2024-04-18 19:23:07.357767] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:29:51.546 [2024-04-18 19:23:07.357868] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:29:51.546 [2024-04-18 19:23:07.357879] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:51.546 [2024-04-18 19:23:07.358044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:29:51.546 [2024-04-18 19:23:07.358359] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:29:51.546 [2024-04-18 19:23:07.358380] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:29:51.546 [2024-04-18 19:23:07.358541] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:51.546 Base_1 00:29:51.546 Base_2 00:29:51.546 19:23:07 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:29:51.546 19:23:07 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:51.546 19:23:07 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:29:51.805 19:23:07 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:29:51.805 19:23:07 -- 
bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:29:51.805 19:23:07 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@12 -- # local i 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:51.805 19:23:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:29:52.064 [2024-04-18 19:23:07.827828] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:29:52.064 /dev/nbd0 00:29:52.064 19:23:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:52.064 19:23:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:52.064 19:23:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:52.064 19:23:07 -- common/autotest_common.sh@855 -- # local i 00:29:52.064 19:23:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:52.064 19:23:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:52.064 19:23:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:52.064 19:23:07 -- common/autotest_common.sh@859 -- # break 00:29:52.064 19:23:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:52.064 19:23:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:52.064 19:23:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.064 1+0 records in 00:29:52.064 1+0 records out 00:29:52.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637402 s, 6.4 MB/s 00:29:52.064 19:23:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.064 19:23:07 -- common/autotest_common.sh@872 -- # size=4096 00:29:52.064 19:23:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.064 19:23:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:52.064 19:23:07 -- common/autotest_common.sh@875 -- # return 0 00:29:52.064 19:23:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.064 19:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:52.064 19:23:07 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:29:52.064 19:23:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:52.064 19:23:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:52.322 { 00:29:52.322 "nbd_device": "/dev/nbd0", 00:29:52.322 "bdev_name": "raid" 00:29:52.322 } 00:29:52.322 ]' 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:52.322 { 00:29:52.322 "nbd_device": "/dev/nbd0", 00:29:52.322 "bdev_name": "raid" 00:29:52.322 } 00:29:52.322 ]' 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@65 
-- # grep -c /dev/nbd 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@65 -- # count=1 00:29:52.322 19:23:08 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@98 -- # count=1 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@20 -- # local blksize 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:29:52.322 19:23:08 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:29:52.580 4096+0 records in 00:29:52.580 4096+0 records out 00:29:52.580 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0236772 s, 88.6 MB/s 00:29:52.580 19:23:08 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:29:52.580 4096+0 records in 00:29:52.580 4096+0 records out 00:29:52.580 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.236135 s, 8.9 MB/s 00:29:52.580 19:23:08 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:29:52.580 19:23:08 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:29:52.839 128+0 records in 00:29:52.839 128+0 records out 00:29:52.839 65536 bytes (66 kB, 64 KiB) copied, 0.0013608 s, 48.2 MB/s 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:29:52.839 2035+0 records in 00:29:52.839 2035+0 records out 00:29:52.839 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00848673 s, 123 MB/s 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@45 -- # 
blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:29:52.839 19:23:08 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:29:52.840 456+0 records in 00:29:52.840 456+0 records out 00:29:52.840 233472 bytes (233 kB, 228 KiB) copied, 0.00376487 s, 62.0 MB/s 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@53 -- # return 0 00:29:52.840 19:23:08 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:52.840 19:23:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:52.840 19:23:08 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:52.840 19:23:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:52.840 19:23:08 -- bdev/nbd_common.sh@51 -- # local i 00:29:52.840 19:23:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:52.840 19:23:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:53.098 [2024-04-18 19:23:08.865044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@41 -- # break 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@45 -- # return 0 00:29:53.098 19:23:08 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:53.098 19:23:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@65 -- # true 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@65 -- # count=0 00:29:53.356 19:23:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:53.356 19:23:09 -- bdev/bdev_raid.sh@106 -- # count=0 00:29:53.356 19:23:09 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:29:53.356 19:23:09 -- bdev/bdev_raid.sh@111 -- # 
killprocess 120409 00:29:53.356 19:23:09 -- common/autotest_common.sh@936 -- # '[' -z 120409 ']' 00:29:53.356 19:23:09 -- common/autotest_common.sh@940 -- # kill -0 120409 00:29:53.356 19:23:09 -- common/autotest_common.sh@941 -- # uname 00:29:53.356 19:23:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:53.356 19:23:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120409 00:29:53.356 killing process with pid 120409 00:29:53.356 19:23:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:53.356 19:23:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:53.356 19:23:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120409' 00:29:53.356 19:23:09 -- common/autotest_common.sh@955 -- # kill 120409 00:29:53.356 19:23:09 -- common/autotest_common.sh@960 -- # wait 120409 00:29:53.356 [2024-04-18 19:23:09.160499] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:53.356 [2024-04-18 19:23:09.160612] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:53.356 [2024-04-18 19:23:09.160665] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:53.356 [2024-04-18 19:23:09.160791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:29:53.615 [2024-04-18 19:23:09.378524] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:55.003 ************************************ 00:29:55.003 END TEST raid_function_test_raid0 00:29:55.003 ************************************ 00:29:55.003 19:23:10 -- bdev/bdev_raid.sh@113 -- # return 0 00:29:55.003 00:29:55.003 real 0m4.832s 00:29:55.003 user 0m5.897s 00:29:55.003 sys 0m1.055s 00:29:55.003 19:23:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:55.003 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:29:55.003 19:23:10 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:29:55.003 19:23:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:55.003 19:23:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:55.003 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:29:55.261 ************************************ 00:29:55.261 START TEST raid_function_test_concat 00:29:55.261 ************************************ 00:29:55.261 19:23:10 -- common/autotest_common.sh@1111 -- # raid_function_test concat 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@86 -- # raid_pid=120597 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 120597' 00:29:55.261 Process raid pid: 120597 00:29:55.261 19:23:10 -- bdev/bdev_raid.sh@88 -- # waitforlisten 120597 /var/tmp/spdk-raid.sock 00:29:55.261 19:23:10 -- common/autotest_common.sh@817 -- # '[' -z 120597 ']' 00:29:55.261 19:23:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:55.261 19:23:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:55.261 19:23:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:29:55.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:55.261 19:23:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:55.261 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:29:55.261 [2024-04-18 19:23:11.056672] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:29:55.261 [2024-04-18 19:23:11.057117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.520 [2024-04-18 19:23:11.237905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.778 [2024-04-18 19:23:11.509532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.036 [2024-04-18 19:23:11.741016] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:56.294 19:23:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:56.294 19:23:12 -- common/autotest_common.sh@850 -- # return 0 00:29:56.294 19:23:12 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:29:56.294 19:23:12 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:29:56.294 19:23:12 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:29:56.294 19:23:12 -- bdev/bdev_raid.sh@70 -- # cat 00:29:56.294 19:23:12 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:29:56.553 [2024-04-18 19:23:12.350959] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:29:56.553 [2024-04-18 19:23:12.353138] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:29:56.553 [2024-04-18 19:23:12.353325] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:29:56.553 [2024-04-18 19:23:12.353416] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:56.553 [2024-04-18 19:23:12.353600] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:29:56.553 [2024-04-18 19:23:12.354061] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:29:56.553 [2024-04-18 19:23:12.354201] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:29:56.553 [2024-04-18 19:23:12.354437] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.553 Base_1 00:29:56.553 Base_2 00:29:56.553 19:23:12 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:29:56.553 19:23:12 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:56.553 19:23:12 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:29:56.811 19:23:12 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:29:56.811 19:23:12 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:29:56.811 19:23:12 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@12 -- # local i 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:56.811 19:23:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:29:57.121 [2024-04-18 19:23:12.847129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:29:57.121 /dev/nbd0 00:29:57.121 19:23:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:57.121 19:23:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:57.121 19:23:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:57.121 19:23:12 -- common/autotest_common.sh@855 -- # local i 00:29:57.121 19:23:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:57.121 19:23:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:57.121 19:23:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:57.121 19:23:12 -- common/autotest_common.sh@859 -- # break 00:29:57.121 19:23:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:57.121 19:23:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:57.121 19:23:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:57.121 1+0 records in 00:29:57.121 1+0 records out 00:29:57.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417119 s, 9.8 MB/s 00:29:57.121 19:23:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:57.121 19:23:12 -- common/autotest_common.sh@872 -- # size=4096 00:29:57.121 19:23:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:57.121 19:23:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:57.121 19:23:12 -- common/autotest_common.sh@875 -- # return 0 00:29:57.121 19:23:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:57.121 19:23:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:57.121 19:23:12 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:29:57.121 19:23:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:57.121 19:23:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:57.385 { 00:29:57.385 "nbd_device": "/dev/nbd0", 00:29:57.385 "bdev_name": "raid" 00:29:57.385 } 00:29:57.385 ]' 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:57.385 { 00:29:57.385 "nbd_device": "/dev/nbd0", 00:29:57.385 "bdev_name": "raid" 00:29:57.385 } 00:29:57.385 ]' 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@65 -- # count=1 00:29:57.385 19:23:13 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@98 -- # count=1 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@18 -- # local 
nbd=/dev/nbd0 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@20 -- # local blksize 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:29:57.385 4096+0 records in 00:29:57.385 4096+0 records out 00:29:57.385 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0327758 s, 64.0 MB/s 00:29:57.385 19:23:13 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:29:57.644 4096+0 records in 00:29:57.644 4096+0 records out 00:29:57.644 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.29969 s, 7.0 MB/s 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:29:57.644 128+0 records in 00:29:57.644 128+0 records out 00:29:57.644 65536 bytes (66 kB, 64 KiB) copied, 0.00104834 s, 62.5 MB/s 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:29:57.644 19:23:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:29:57.903 2035+0 records in 00:29:57.903 2035+0 records out 00:29:57.903 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0143397 s, 72.7 MB/s 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:29:57.903 19:23:13 -- 
bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:29:57.903 456+0 records in 00:29:57.903 456+0 records out 00:29:57.903 233472 bytes (233 kB, 228 KiB) copied, 0.0030493 s, 76.6 MB/s 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@53 -- # return 0 00:29:57.903 19:23:13 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:57.903 19:23:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:57.903 19:23:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:57.903 19:23:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:57.903 19:23:13 -- bdev/nbd_common.sh@51 -- # local i 00:29:57.903 19:23:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:57.903 19:23:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:58.161 [2024-04-18 19:23:13.863826] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@41 -- # break 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.161 19:23:13 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:58.161 19:23:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@65 -- # true 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@65 -- # count=0 00:29:58.420 19:23:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:58.420 19:23:14 -- bdev/bdev_raid.sh@106 -- # count=0 00:29:58.420 19:23:14 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:29:58.420 19:23:14 -- bdev/bdev_raid.sh@111 -- # killprocess 120597 00:29:58.420 19:23:14 -- common/autotest_common.sh@936 -- # '[' -z 120597 ']' 00:29:58.420 19:23:14 -- common/autotest_common.sh@940 -- # kill -0 120597 00:29:58.420 19:23:14 -- common/autotest_common.sh@941 -- # uname 00:29:58.420 19:23:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:58.420 19:23:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120597 00:29:58.420 killing process with pid 120597 00:29:58.420 19:23:14 -- common/autotest_common.sh@942 
-- # process_name=reactor_0 00:29:58.420 19:23:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:58.420 19:23:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120597' 00:29:58.420 19:23:14 -- common/autotest_common.sh@955 -- # kill 120597 00:29:58.420 [2024-04-18 19:23:14.231422] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:58.420 19:23:14 -- common/autotest_common.sh@960 -- # wait 120597 00:29:58.420 [2024-04-18 19:23:14.231528] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:58.420 [2024-04-18 19:23:14.231589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:58.420 [2024-04-18 19:23:14.231601] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:29:58.678 [2024-04-18 19:23:14.448577] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:00.055 ************************************ 00:30:00.055 END TEST raid_function_test_concat 00:30:00.055 ************************************ 00:30:00.055 19:23:15 -- bdev/bdev_raid.sh@113 -- # return 0 00:30:00.055 00:30:00.055 real 0m4.924s 00:30:00.055 user 0m6.066s 00:30:00.055 sys 0m1.081s 00:30:00.055 19:23:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:00.055 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:30:00.055 19:23:15 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:30:00.055 19:23:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:00.055 19:23:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:00.055 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:30:00.055 ************************************ 00:30:00.314 START TEST raid0_resize_test 00:30:00.314 ************************************ 00:30:00.314 19:23:15 -- common/autotest_common.sh@1111 -- # raid0_resize_test 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@301 -- # raid_pid=120767 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 120767' 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:00.314 Process raid pid: 120767 00:30:00.314 19:23:15 -- bdev/bdev_raid.sh@303 -- # waitforlisten 120767 /var/tmp/spdk-raid.sock 00:30:00.314 19:23:15 -- common/autotest_common.sh@817 -- # '[' -z 120767 ']' 00:30:00.314 19:23:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:00.314 19:23:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:00.314 19:23:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:00.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
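Both raid_function_test runs above exercise the same unmap/data-verify pattern against the raid bdev exported at /dev/nbd0: seed a random reference file, copy it onto the device, then for each offset/length pair zero that range in the reference, blkdiscard the same range on the device, and cmp the two. A condensed sketch of that loop follows; the device path, file names and offsets mirror the log, but the script is an illustration of the pattern, not the bdev_raid.sh source.

#!/usr/bin/env bash
# Sketch of the unmap/data-verify loop seen in the log. Assumes the raid bdev
# is already exported via NBD at $nbd; offsets and counts follow the log.
set -euo pipefail

nbd=/dev/nbd0
ref=/raidrandtest
blksize=$(lsblk --noheadings -o LOG-SEC "$nbd" | awk 'NR==1 {print $1}')
rw_blk_num=4096

# Seed the reference file, copy it onto the raid device, then verify.
dd if=/dev/urandom of="$ref" bs="$blksize" count="$rw_blk_num"
dd if="$ref" of="$nbd" bs="$blksize" count="$rw_blk_num" oflag=direct
blockdev --flushbufs "$nbd"
cmp -b -n $((blksize * rw_blk_num)) "$ref" "$nbd"

# Unmap three ranges on the device, zero the same ranges in the reference,
# and re-compare: discarded blocks must read back as zeroes.
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
for i in "${!unmap_blk_offs[@]}"; do
    dd if=/dev/zero of="$ref" bs="$blksize" seek="${unmap_blk_offs[i]}" \
        count="${unmap_blk_nums[i]}" conv=notrunc
    blkdiscard -o $((unmap_blk_offs[i] * blksize)) \
        -l $((unmap_blk_nums[i] * blksize)) "$nbd"
    blockdev --flushbufs "$nbd"
    cmp -b -n $((blksize * rw_blk_num)) "$ref" "$nbd"
done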
00:30:00.314 19:23:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:00.314 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:30:00.314 [2024-04-18 19:23:16.062249] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:30:00.314 [2024-04-18 19:23:16.062457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.573 [2024-04-18 19:23:16.243052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.831 [2024-04-18 19:23:16.507127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.831 [2024-04-18 19:23:16.748892] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:01.090 19:23:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:01.090 19:23:16 -- common/autotest_common.sh@850 -- # return 0 00:30:01.090 19:23:16 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:30:01.348 Base_1 00:30:01.348 19:23:17 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:30:01.607 Base_2 00:30:01.607 19:23:17 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:30:01.865 [2024-04-18 19:23:17.719300] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:30:01.865 [2024-04-18 19:23:17.721439] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:30:01.865 [2024-04-18 19:23:17.721509] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:30:01.865 [2024-04-18 19:23:17.721521] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:01.865 [2024-04-18 19:23:17.721704] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:30:01.865 [2024-04-18 19:23:17.722045] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:30:01.865 [2024-04-18 19:23:17.722056] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:30:01.865 [2024-04-18 19:23:17.722244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.865 19:23:17 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:30:02.441 [2024-04-18 19:23:18.063308] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:30:02.441 [2024-04-18 19:23:18.063351] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:30:02.441 true 00:30:02.441 19:23:18 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:30:02.441 19:23:18 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:30:02.743 [2024-04-18 19:23:18.375545] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:02.743 19:23:18 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:30:02.743 19:23:18 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:30:02.743 19:23:18 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:30:02.743 
19:23:18 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:30:02.743 [2024-04-18 19:23:18.627429] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:30:02.743 [2024-04-18 19:23:18.627487] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:30:02.743 [2024-04-18 19:23:18.627538] bdev_raid.c:2249:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:30:02.743 true 00:30:02.743 19:23:18 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:30:02.743 19:23:18 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:30:03.002 [2024-04-18 19:23:18.815543] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:03.002 19:23:18 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:30:03.002 19:23:18 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:30:03.002 19:23:18 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:30:03.002 19:23:18 -- bdev/bdev_raid.sh@332 -- # killprocess 120767 00:30:03.002 19:23:18 -- common/autotest_common.sh@936 -- # '[' -z 120767 ']' 00:30:03.002 19:23:18 -- common/autotest_common.sh@940 -- # kill -0 120767 00:30:03.002 19:23:18 -- common/autotest_common.sh@941 -- # uname 00:30:03.002 19:23:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:03.002 19:23:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120767 00:30:03.002 killing process with pid 120767 00:30:03.002 19:23:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:03.002 19:23:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:03.002 19:23:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120767' 00:30:03.002 19:23:18 -- common/autotest_common.sh@955 -- # kill 120767 00:30:03.002 19:23:18 -- common/autotest_common.sh@960 -- # wait 120767 00:30:03.002 [2024-04-18 19:23:18.857359] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:03.002 [2024-04-18 19:23:18.857427] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:03.002 [2024-04-18 19:23:18.857475] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:03.002 [2024-04-18 19:23:18.857484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:30:03.002 [2024-04-18 19:23:18.858083] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:04.380 ************************************ 00:30:04.380 END TEST raid0_resize_test 00:30:04.380 ************************************ 00:30:04.380 19:23:20 -- bdev/bdev_raid.sh@334 -- # return 0 00:30:04.380 00:30:04.380 real 0m4.244s 00:30:04.380 user 0m6.055s 00:30:04.380 sys 0m0.520s 00:30:04.380 19:23:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:04.380 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:04.380 19:23:20 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:30:04.380 19:23:20 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:30:04.380 19:23:20 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:30:04.380 19:23:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:04.380 19:23:20 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:30:04.380 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:04.639 ************************************ 00:30:04.639 START TEST raid_state_function_test 00:30:04.639 ************************************ 00:30:04.639 19:23:20 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 2 false 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=120879 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120879' 00:30:04.639 Process raid pid: 120879 00:30:04.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:04.639 19:23:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120879 /var/tmp/spdk-raid.sock 00:30:04.639 19:23:20 -- common/autotest_common.sh@817 -- # '[' -z 120879 ']' 00:30:04.639 19:23:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:04.639 19:23:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:04.639 19:23:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:04.639 19:23:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:04.639 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:04.639 [2024-04-18 19:23:20.412060] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
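The raid0_resize_test that just completed is driven entirely over the dedicated RPC socket. Stripped of the bookkeeping, its RPC sequence is roughly the following; the rpc.py path, socket and sizes mirror the log, and the comments restate the block counts the log reports, but this is a summary sketch rather than the test script itself.

# Rough RPC sequence behind raid0_resize_test, reconstructed from the log above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_null_create Base_1 32 512                 # two 32 MiB null bdevs, 512 B blocks
$rpc bdev_null_create Base_2 32 512
$rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

$rpc bdev_null_resize Base_1 64                     # grow one base bdev only
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072 (64 MiB raid)

$rpc bdev_null_resize Base_2 64                     # grow the second base bdev
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144 (128 MiB raid)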
00:30:04.639 [2024-04-18 19:23:20.412251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.898 [2024-04-18 19:23:20.588991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.898 [2024-04-18 19:23:20.805284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.156 [2024-04-18 19:23:21.023546] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:05.723 19:23:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:05.723 19:23:21 -- common/autotest_common.sh@850 -- # return 0 00:30:05.723 19:23:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:05.723 [2024-04-18 19:23:21.641613] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:05.723 [2024-04-18 19:23:21.641720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:05.723 [2024-04-18 19:23:21.641751] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:05.723 [2024-04-18 19:23:21.641771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:05.982 "name": "Existed_Raid", 00:30:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.982 "strip_size_kb": 64, 00:30:05.982 "state": "configuring", 00:30:05.982 "raid_level": "raid0", 00:30:05.982 "superblock": false, 00:30:05.982 "num_base_bdevs": 2, 00:30:05.982 "num_base_bdevs_discovered": 0, 00:30:05.982 "num_base_bdevs_operational": 2, 00:30:05.982 "base_bdevs_list": [ 00:30:05.982 { 00:30:05.982 "name": "BaseBdev1", 00:30:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.982 "is_configured": false, 00:30:05.982 "data_offset": 0, 00:30:05.982 "data_size": 0 00:30:05.982 }, 00:30:05.982 { 00:30:05.982 "name": "BaseBdev2", 00:30:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.982 "is_configured": false, 00:30:05.982 "data_offset": 0, 00:30:05.982 "data_size": 0 00:30:05.982 } 00:30:05.982 ] 00:30:05.982 }' 00:30:05.982 19:23:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:05.982 19:23:21 -- 
common/autotest_common.sh@10 -- # set +x 00:30:06.917 19:23:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:07.177 [2024-04-18 19:23:23.005799] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:07.177 [2024-04-18 19:23:23.005845] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:30:07.177 19:23:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:07.438 [2024-04-18 19:23:23.229856] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:07.438 [2024-04-18 19:23:23.229945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:07.438 [2024-04-18 19:23:23.229957] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:07.438 [2024-04-18 19:23:23.229983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:07.438 19:23:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:07.697 [2024-04-18 19:23:23.567226] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:07.697 BaseBdev1 00:30:07.697 19:23:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:30:07.697 19:23:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:07.697 19:23:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:07.697 19:23:23 -- common/autotest_common.sh@887 -- # local i 00:30:07.697 19:23:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:07.697 19:23:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:07.697 19:23:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:07.955 19:23:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:08.215 [ 00:30:08.215 { 00:30:08.215 "name": "BaseBdev1", 00:30:08.215 "aliases": [ 00:30:08.215 "4f1426ef-0fa8-4cbf-9dca-256bf8495ffb" 00:30:08.215 ], 00:30:08.215 "product_name": "Malloc disk", 00:30:08.215 "block_size": 512, 00:30:08.215 "num_blocks": 65536, 00:30:08.215 "uuid": "4f1426ef-0fa8-4cbf-9dca-256bf8495ffb", 00:30:08.215 "assigned_rate_limits": { 00:30:08.215 "rw_ios_per_sec": 0, 00:30:08.215 "rw_mbytes_per_sec": 0, 00:30:08.215 "r_mbytes_per_sec": 0, 00:30:08.215 "w_mbytes_per_sec": 0 00:30:08.215 }, 00:30:08.215 "claimed": true, 00:30:08.215 "claim_type": "exclusive_write", 00:30:08.215 "zoned": false, 00:30:08.215 "supported_io_types": { 00:30:08.215 "read": true, 00:30:08.215 "write": true, 00:30:08.215 "unmap": true, 00:30:08.215 "write_zeroes": true, 00:30:08.215 "flush": true, 00:30:08.215 "reset": true, 00:30:08.215 "compare": false, 00:30:08.215 "compare_and_write": false, 00:30:08.215 "abort": true, 00:30:08.215 "nvme_admin": false, 00:30:08.215 "nvme_io": false 00:30:08.215 }, 00:30:08.215 "memory_domains": [ 00:30:08.215 { 00:30:08.215 "dma_device_id": "system", 00:30:08.215 "dma_device_type": 1 00:30:08.215 }, 00:30:08.215 { 00:30:08.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.215 "dma_device_type": 2 00:30:08.215 
} 00:30:08.215 ], 00:30:08.215 "driver_specific": {} 00:30:08.215 } 00:30:08.215 ] 00:30:08.215 19:23:23 -- common/autotest_common.sh@893 -- # return 0 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:08.215 19:23:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:08.215 19:23:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.215 19:23:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:08.474 19:23:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:08.474 "name": "Existed_Raid", 00:30:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.474 "strip_size_kb": 64, 00:30:08.474 "state": "configuring", 00:30:08.474 "raid_level": "raid0", 00:30:08.474 "superblock": false, 00:30:08.474 "num_base_bdevs": 2, 00:30:08.474 "num_base_bdevs_discovered": 1, 00:30:08.474 "num_base_bdevs_operational": 2, 00:30:08.474 "base_bdevs_list": [ 00:30:08.474 { 00:30:08.474 "name": "BaseBdev1", 00:30:08.474 "uuid": "4f1426ef-0fa8-4cbf-9dca-256bf8495ffb", 00:30:08.474 "is_configured": true, 00:30:08.474 "data_offset": 0, 00:30:08.474 "data_size": 65536 00:30:08.474 }, 00:30:08.474 { 00:30:08.474 "name": "BaseBdev2", 00:30:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.474 "is_configured": false, 00:30:08.474 "data_offset": 0, 00:30:08.474 "data_size": 0 00:30:08.474 } 00:30:08.474 ] 00:30:08.474 }' 00:30:08.474 19:23:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:08.474 19:23:24 -- common/autotest_common.sh@10 -- # set +x 00:30:09.041 19:23:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:09.300 [2024-04-18 19:23:25.083734] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:09.300 [2024-04-18 19:23:25.083798] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:30:09.300 19:23:25 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:30:09.300 19:23:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:09.557 [2024-04-18 19:23:25.359806] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:09.557 [2024-04-18 19:23:25.361862] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:09.557 [2024-04-18 19:23:25.361943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:09.557 19:23:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:30:09.557 19:23:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:09.557 19:23:25 -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:30:09.557 19:23:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:09.557 19:23:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.558 19:23:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.891 19:23:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:09.891 "name": "Existed_Raid", 00:30:09.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.891 "strip_size_kb": 64, 00:30:09.891 "state": "configuring", 00:30:09.891 "raid_level": "raid0", 00:30:09.891 "superblock": false, 00:30:09.891 "num_base_bdevs": 2, 00:30:09.891 "num_base_bdevs_discovered": 1, 00:30:09.891 "num_base_bdevs_operational": 2, 00:30:09.891 "base_bdevs_list": [ 00:30:09.891 { 00:30:09.891 "name": "BaseBdev1", 00:30:09.891 "uuid": "4f1426ef-0fa8-4cbf-9dca-256bf8495ffb", 00:30:09.891 "is_configured": true, 00:30:09.891 "data_offset": 0, 00:30:09.891 "data_size": 65536 00:30:09.891 }, 00:30:09.891 { 00:30:09.891 "name": "BaseBdev2", 00:30:09.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.891 "is_configured": false, 00:30:09.891 "data_offset": 0, 00:30:09.891 "data_size": 0 00:30:09.891 } 00:30:09.891 ] 00:30:09.891 }' 00:30:09.891 19:23:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:09.891 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:30:10.828 19:23:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:10.828 [2024-04-18 19:23:26.753951] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:10.828 [2024-04-18 19:23:26.754003] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:30:10.828 [2024-04-18 19:23:26.754025] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:10.828 [2024-04-18 19:23:26.754199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:30:10.828 [2024-04-18 19:23:26.754535] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:30:10.828 [2024-04-18 19:23:26.754548] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:30:10.828 [2024-04-18 19:23:26.754871] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.828 BaseBdev2 00:30:11.087 19:23:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:30:11.087 19:23:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:30:11.087 19:23:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:11.087 19:23:26 -- common/autotest_common.sh@887 -- # local i 00:30:11.087 19:23:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
00:30:11.087 19:23:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:11.087 19:23:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:11.346 19:23:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:11.606 [ 00:30:11.606 { 00:30:11.606 "name": "BaseBdev2", 00:30:11.606 "aliases": [ 00:30:11.606 "b86dfeb4-f888-4f6a-bcb0-eca5e3315c83" 00:30:11.606 ], 00:30:11.606 "product_name": "Malloc disk", 00:30:11.606 "block_size": 512, 00:30:11.606 "num_blocks": 65536, 00:30:11.606 "uuid": "b86dfeb4-f888-4f6a-bcb0-eca5e3315c83", 00:30:11.606 "assigned_rate_limits": { 00:30:11.606 "rw_ios_per_sec": 0, 00:30:11.606 "rw_mbytes_per_sec": 0, 00:30:11.606 "r_mbytes_per_sec": 0, 00:30:11.606 "w_mbytes_per_sec": 0 00:30:11.606 }, 00:30:11.606 "claimed": true, 00:30:11.606 "claim_type": "exclusive_write", 00:30:11.606 "zoned": false, 00:30:11.606 "supported_io_types": { 00:30:11.606 "read": true, 00:30:11.606 "write": true, 00:30:11.606 "unmap": true, 00:30:11.606 "write_zeroes": true, 00:30:11.606 "flush": true, 00:30:11.606 "reset": true, 00:30:11.606 "compare": false, 00:30:11.606 "compare_and_write": false, 00:30:11.606 "abort": true, 00:30:11.606 "nvme_admin": false, 00:30:11.606 "nvme_io": false 00:30:11.606 }, 00:30:11.606 "memory_domains": [ 00:30:11.606 { 00:30:11.606 "dma_device_id": "system", 00:30:11.606 "dma_device_type": 1 00:30:11.606 }, 00:30:11.606 { 00:30:11.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.606 "dma_device_type": 2 00:30:11.606 } 00:30:11.606 ], 00:30:11.606 "driver_specific": {} 00:30:11.606 } 00:30:11.606 ] 00:30:11.606 19:23:27 -- common/autotest_common.sh@893 -- # return 0 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.606 19:23:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:11.866 19:23:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:11.866 "name": "Existed_Raid", 00:30:11.866 "uuid": "3900b075-baa2-4c81-8c37-7fc5c1676fe0", 00:30:11.866 "strip_size_kb": 64, 00:30:11.866 "state": "online", 00:30:11.866 "raid_level": "raid0", 00:30:11.866 "superblock": false, 00:30:11.866 "num_base_bdevs": 2, 00:30:11.866 "num_base_bdevs_discovered": 2, 00:30:11.866 "num_base_bdevs_operational": 2, 00:30:11.866 "base_bdevs_list": [ 00:30:11.866 { 00:30:11.866 "name": "BaseBdev1", 00:30:11.866 "uuid": 
"4f1426ef-0fa8-4cbf-9dca-256bf8495ffb", 00:30:11.866 "is_configured": true, 00:30:11.866 "data_offset": 0, 00:30:11.866 "data_size": 65536 00:30:11.866 }, 00:30:11.866 { 00:30:11.866 "name": "BaseBdev2", 00:30:11.866 "uuid": "b86dfeb4-f888-4f6a-bcb0-eca5e3315c83", 00:30:11.866 "is_configured": true, 00:30:11.866 "data_offset": 0, 00:30:11.866 "data_size": 65536 00:30:11.866 } 00:30:11.866 ] 00:30:11.866 }' 00:30:11.866 19:23:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:11.866 19:23:27 -- common/autotest_common.sh@10 -- # set +x 00:30:12.434 19:23:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:12.692 [2024-04-18 19:23:28.390460] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:12.692 [2024-04-18 19:23:28.390507] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:12.692 [2024-04-18 19:23:28.390572] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.692 19:23:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.950 19:23:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:12.950 "name": "Existed_Raid", 00:30:12.950 "uuid": "3900b075-baa2-4c81-8c37-7fc5c1676fe0", 00:30:12.950 "strip_size_kb": 64, 00:30:12.950 "state": "offline", 00:30:12.950 "raid_level": "raid0", 00:30:12.950 "superblock": false, 00:30:12.950 "num_base_bdevs": 2, 00:30:12.950 "num_base_bdevs_discovered": 1, 00:30:12.950 "num_base_bdevs_operational": 1, 00:30:12.950 "base_bdevs_list": [ 00:30:12.950 { 00:30:12.950 "name": null, 00:30:12.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.950 "is_configured": false, 00:30:12.950 "data_offset": 0, 00:30:12.950 "data_size": 65536 00:30:12.950 }, 00:30:12.950 { 00:30:12.950 "name": "BaseBdev2", 00:30:12.950 "uuid": "b86dfeb4-f888-4f6a-bcb0-eca5e3315c83", 00:30:12.950 "is_configured": true, 00:30:12.950 "data_offset": 0, 00:30:12.950 "data_size": 65536 00:30:12.950 } 00:30:12.950 ] 00:30:12.950 }' 00:30:12.950 19:23:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:12.950 19:23:28 -- common/autotest_common.sh@10 -- # set +x 00:30:13.517 19:23:29 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:30:13.517 19:23:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:13.517 19:23:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.517 19:23:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:30:13.775 19:23:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:30:13.775 19:23:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:13.775 19:23:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:14.033 [2024-04-18 19:23:29.879900] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:14.033 [2024-04-18 19:23:29.879993] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:30:14.291 19:23:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:30:14.291 19:23:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:14.291 19:23:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:30:14.291 19:23:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.549 19:23:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:30:14.549 19:23:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:30:14.549 19:23:30 -- bdev/bdev_raid.sh@287 -- # killprocess 120879 00:30:14.549 19:23:30 -- common/autotest_common.sh@936 -- # '[' -z 120879 ']' 00:30:14.549 19:23:30 -- common/autotest_common.sh@940 -- # kill -0 120879 00:30:14.549 19:23:30 -- common/autotest_common.sh@941 -- # uname 00:30:14.549 19:23:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:14.549 19:23:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120879 00:30:14.549 killing process with pid 120879 00:30:14.549 19:23:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:14.549 19:23:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:14.549 19:23:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120879' 00:30:14.549 19:23:30 -- common/autotest_common.sh@955 -- # kill 120879 00:30:14.549 19:23:30 -- common/autotest_common.sh@960 -- # wait 120879 00:30:14.549 [2024-04-18 19:23:30.295287] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:14.549 [2024-04-18 19:23:30.295425] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:15.925 ************************************ 00:30:15.925 END TEST raid_state_function_test 00:30:15.925 ************************************ 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:30:15.925 00:30:15.925 real 0m11.399s 00:30:15.925 user 0m19.489s 00:30:15.925 sys 0m1.425s 00:30:15.925 19:23:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:15.925 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:30:15.925 19:23:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:15.925 19:23:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:15.925 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:30:15.925 ************************************ 00:30:15.925 START TEST raid_state_function_test_sb 00:30:15.925 ************************************ 00:30:15.925 19:23:31 -- common/autotest_common.sh@1111 -- # 
raid_state_function_test raid0 2 true 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:15.925 19:23:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=121241 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121241' 00:30:15.926 Process raid pid: 121241 00:30:15.926 19:23:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121241 /var/tmp/spdk-raid.sock 00:30:15.926 19:23:31 -- common/autotest_common.sh@817 -- # '[' -z 121241 ']' 00:30:15.926 19:23:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:15.926 19:23:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:15.926 19:23:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:15.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:15.926 19:23:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:15.926 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:30:16.183 [2024-04-18 19:23:31.896880] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:30:16.184 [2024-04-18 19:23:31.897237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.184 [2024-04-18 19:23:32.065301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.442 [2024-04-18 19:23:32.290813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.699 [2024-04-18 19:23:32.504762] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.275 19:23:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:17.275 19:23:32 -- common/autotest_common.sh@850 -- # return 0 00:30:17.275 19:23:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:17.553 [2024-04-18 19:23:33.231873] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:17.553 [2024-04-18 19:23:33.232144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:17.553 [2024-04-18 19:23:33.232236] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:17.553 [2024-04-18 19:23:33.232333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:17.553 19:23:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.811 19:23:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:17.811 "name": "Existed_Raid", 00:30:17.811 "uuid": "7d1f256e-22f8-4f1d-81cb-d311f0008ba9", 00:30:17.811 "strip_size_kb": 64, 00:30:17.811 "state": "configuring", 00:30:17.811 "raid_level": "raid0", 00:30:17.811 "superblock": true, 00:30:17.811 "num_base_bdevs": 2, 00:30:17.811 "num_base_bdevs_discovered": 0, 00:30:17.811 "num_base_bdevs_operational": 2, 00:30:17.811 "base_bdevs_list": [ 00:30:17.811 { 00:30:17.811 "name": "BaseBdev1", 00:30:17.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.811 "is_configured": false, 00:30:17.811 "data_offset": 0, 00:30:17.811 "data_size": 0 00:30:17.811 }, 00:30:17.811 { 00:30:17.811 "name": "BaseBdev2", 00:30:17.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.811 "is_configured": false, 00:30:17.811 "data_offset": 0, 00:30:17.811 "data_size": 0 00:30:17.811 } 00:30:17.811 ] 00:30:17.811 }' 00:30:17.811 19:23:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:17.811 19:23:33 -- 
common/autotest_common.sh@10 -- # set +x 00:30:18.376 19:23:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:18.633 [2024-04-18 19:23:34.424011] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:18.633 [2024-04-18 19:23:34.424577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:30:18.633 19:23:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:18.891 [2024-04-18 19:23:34.720157] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:18.891 [2024-04-18 19:23:34.720466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:18.891 [2024-04-18 19:23:34.720557] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:18.891 [2024-04-18 19:23:34.720614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:18.891 19:23:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:19.148 [2024-04-18 19:23:35.016518] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:19.148 BaseBdev1 00:30:19.148 19:23:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:30:19.148 19:23:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:19.148 19:23:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:19.148 19:23:35 -- common/autotest_common.sh@887 -- # local i 00:30:19.148 19:23:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:19.148 19:23:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:19.148 19:23:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:19.405 19:23:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:19.661 [ 00:30:19.661 { 00:30:19.661 "name": "BaseBdev1", 00:30:19.661 "aliases": [ 00:30:19.661 "0a19a599-c50b-488d-bfb3-90c9bf00344f" 00:30:19.661 ], 00:30:19.661 "product_name": "Malloc disk", 00:30:19.661 "block_size": 512, 00:30:19.661 "num_blocks": 65536, 00:30:19.661 "uuid": "0a19a599-c50b-488d-bfb3-90c9bf00344f", 00:30:19.661 "assigned_rate_limits": { 00:30:19.661 "rw_ios_per_sec": 0, 00:30:19.661 "rw_mbytes_per_sec": 0, 00:30:19.661 "r_mbytes_per_sec": 0, 00:30:19.661 "w_mbytes_per_sec": 0 00:30:19.661 }, 00:30:19.661 "claimed": true, 00:30:19.661 "claim_type": "exclusive_write", 00:30:19.661 "zoned": false, 00:30:19.661 "supported_io_types": { 00:30:19.661 "read": true, 00:30:19.661 "write": true, 00:30:19.661 "unmap": true, 00:30:19.661 "write_zeroes": true, 00:30:19.661 "flush": true, 00:30:19.661 "reset": true, 00:30:19.661 "compare": false, 00:30:19.661 "compare_and_write": false, 00:30:19.661 "abort": true, 00:30:19.661 "nvme_admin": false, 00:30:19.661 "nvme_io": false 00:30:19.661 }, 00:30:19.661 "memory_domains": [ 00:30:19.661 { 00:30:19.661 "dma_device_id": "system", 00:30:19.661 "dma_device_type": 1 00:30:19.661 }, 00:30:19.661 { 00:30:19.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:19.661 "dma_device_type": 2 
00:30:19.661 } 00:30:19.661 ], 00:30:19.661 "driver_specific": {} 00:30:19.661 } 00:30:19.661 ] 00:30:19.661 19:23:35 -- common/autotest_common.sh@893 -- # return 0 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.661 19:23:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:19.918 19:23:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:19.918 "name": "Existed_Raid", 00:30:19.918 "uuid": "ce0b74f6-56e6-478f-af8a-7099e0146842", 00:30:19.918 "strip_size_kb": 64, 00:30:19.918 "state": "configuring", 00:30:19.918 "raid_level": "raid0", 00:30:19.918 "superblock": true, 00:30:19.918 "num_base_bdevs": 2, 00:30:19.918 "num_base_bdevs_discovered": 1, 00:30:19.918 "num_base_bdevs_operational": 2, 00:30:19.918 "base_bdevs_list": [ 00:30:19.918 { 00:30:19.918 "name": "BaseBdev1", 00:30:19.918 "uuid": "0a19a599-c50b-488d-bfb3-90c9bf00344f", 00:30:19.918 "is_configured": true, 00:30:19.918 "data_offset": 2048, 00:30:19.918 "data_size": 63488 00:30:19.918 }, 00:30:19.918 { 00:30:19.918 "name": "BaseBdev2", 00:30:19.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.918 "is_configured": false, 00:30:19.918 "data_offset": 0, 00:30:19.918 "data_size": 0 00:30:19.918 } 00:30:19.918 ] 00:30:19.918 }' 00:30:19.918 19:23:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:19.918 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:30:20.852 19:23:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:21.111 [2024-04-18 19:23:36.857078] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:21.111 [2024-04-18 19:23:36.857335] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:30:21.111 19:23:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:30:21.111 19:23:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:21.373 19:23:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:21.957 BaseBdev1 00:30:21.957 19:23:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:30:21.957 19:23:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:21.957 19:23:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:21.957 19:23:37 -- common/autotest_common.sh@887 -- # local i 00:30:21.957 19:23:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:21.957 19:23:37 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:21.957 19:23:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:21.957 19:23:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:22.227 [ 00:30:22.227 { 00:30:22.227 "name": "BaseBdev1", 00:30:22.227 "aliases": [ 00:30:22.227 "4edc6820-864c-4b56-91a1-1275a3ac30b1" 00:30:22.227 ], 00:30:22.227 "product_name": "Malloc disk", 00:30:22.227 "block_size": 512, 00:30:22.227 "num_blocks": 65536, 00:30:22.227 "uuid": "4edc6820-864c-4b56-91a1-1275a3ac30b1", 00:30:22.227 "assigned_rate_limits": { 00:30:22.227 "rw_ios_per_sec": 0, 00:30:22.227 "rw_mbytes_per_sec": 0, 00:30:22.227 "r_mbytes_per_sec": 0, 00:30:22.227 "w_mbytes_per_sec": 0 00:30:22.227 }, 00:30:22.227 "claimed": false, 00:30:22.227 "zoned": false, 00:30:22.227 "supported_io_types": { 00:30:22.227 "read": true, 00:30:22.227 "write": true, 00:30:22.227 "unmap": true, 00:30:22.227 "write_zeroes": true, 00:30:22.228 "flush": true, 00:30:22.228 "reset": true, 00:30:22.228 "compare": false, 00:30:22.228 "compare_and_write": false, 00:30:22.228 "abort": true, 00:30:22.228 "nvme_admin": false, 00:30:22.228 "nvme_io": false 00:30:22.228 }, 00:30:22.228 "memory_domains": [ 00:30:22.228 { 00:30:22.228 "dma_device_id": "system", 00:30:22.228 "dma_device_type": 1 00:30:22.228 }, 00:30:22.228 { 00:30:22.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:22.228 "dma_device_type": 2 00:30:22.228 } 00:30:22.228 ], 00:30:22.228 "driver_specific": {} 00:30:22.228 } 00:30:22.228 ] 00:30:22.228 19:23:38 -- common/autotest_common.sh@893 -- # return 0 00:30:22.228 19:23:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:22.500 [2024-04-18 19:23:38.279603] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:22.500 [2024-04-18 19:23:38.282110] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:22.500 [2024-04-18 19:23:38.282339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.500 19:23:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:22.774 
19:23:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:22.774 "name": "Existed_Raid", 00:30:22.774 "uuid": "ca021111-aafc-4f23-96cb-021f601de519", 00:30:22.774 "strip_size_kb": 64, 00:30:22.774 "state": "configuring", 00:30:22.774 "raid_level": "raid0", 00:30:22.774 "superblock": true, 00:30:22.774 "num_base_bdevs": 2, 00:30:22.774 "num_base_bdevs_discovered": 1, 00:30:22.774 "num_base_bdevs_operational": 2, 00:30:22.774 "base_bdevs_list": [ 00:30:22.774 { 00:30:22.774 "name": "BaseBdev1", 00:30:22.774 "uuid": "4edc6820-864c-4b56-91a1-1275a3ac30b1", 00:30:22.774 "is_configured": true, 00:30:22.774 "data_offset": 2048, 00:30:22.774 "data_size": 63488 00:30:22.774 }, 00:30:22.774 { 00:30:22.774 "name": "BaseBdev2", 00:30:22.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.774 "is_configured": false, 00:30:22.774 "data_offset": 0, 00:30:22.774 "data_size": 0 00:30:22.774 } 00:30:22.774 ] 00:30:22.774 }' 00:30:22.774 19:23:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:22.774 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:30:23.362 19:23:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:23.759 [2024-04-18 19:23:39.541254] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:23.759 [2024-04-18 19:23:39.543874] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:30:23.759 [2024-04-18 19:23:39.544006] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:23.759 BaseBdev2 00:30:23.759 [2024-04-18 19:23:39.544207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:30:23.760 [2024-04-18 19:23:39.544610] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:30:23.760 [2024-04-18 19:23:39.544730] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:30:23.760 [2024-04-18 19:23:39.545011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:23.760 19:23:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:30:23.760 19:23:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:30:23.760 19:23:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:23.760 19:23:39 -- common/autotest_common.sh@887 -- # local i 00:30:23.760 19:23:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:23.760 19:23:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:23.760 19:23:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:24.077 19:23:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:24.336 [ 00:30:24.336 { 00:30:24.336 "name": "BaseBdev2", 00:30:24.336 "aliases": [ 00:30:24.336 "afeb3017-dc37-44ee-b16d-836e7588d85c" 00:30:24.336 ], 00:30:24.336 "product_name": "Malloc disk", 00:30:24.336 "block_size": 512, 00:30:24.336 "num_blocks": 65536, 00:30:24.336 "uuid": "afeb3017-dc37-44ee-b16d-836e7588d85c", 00:30:24.336 "assigned_rate_limits": { 00:30:24.336 "rw_ios_per_sec": 0, 00:30:24.336 "rw_mbytes_per_sec": 0, 00:30:24.336 "r_mbytes_per_sec": 0, 00:30:24.336 "w_mbytes_per_sec": 0 00:30:24.336 }, 00:30:24.336 "claimed": true, 00:30:24.336 "claim_type": "exclusive_write", 00:30:24.336 
"zoned": false, 00:30:24.336 "supported_io_types": { 00:30:24.336 "read": true, 00:30:24.336 "write": true, 00:30:24.336 "unmap": true, 00:30:24.336 "write_zeroes": true, 00:30:24.336 "flush": true, 00:30:24.336 "reset": true, 00:30:24.336 "compare": false, 00:30:24.336 "compare_and_write": false, 00:30:24.336 "abort": true, 00:30:24.336 "nvme_admin": false, 00:30:24.336 "nvme_io": false 00:30:24.336 }, 00:30:24.336 "memory_domains": [ 00:30:24.336 { 00:30:24.336 "dma_device_id": "system", 00:30:24.336 "dma_device_type": 1 00:30:24.336 }, 00:30:24.336 { 00:30:24.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:24.336 "dma_device_type": 2 00:30:24.336 } 00:30:24.336 ], 00:30:24.336 "driver_specific": {} 00:30:24.336 } 00:30:24.336 ] 00:30:24.336 19:23:40 -- common/autotest_common.sh@893 -- # return 0 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.336 19:23:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.594 19:23:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:24.594 "name": "Existed_Raid", 00:30:24.594 "uuid": "ca021111-aafc-4f23-96cb-021f601de519", 00:30:24.594 "strip_size_kb": 64, 00:30:24.594 "state": "online", 00:30:24.594 "raid_level": "raid0", 00:30:24.594 "superblock": true, 00:30:24.594 "num_base_bdevs": 2, 00:30:24.594 "num_base_bdevs_discovered": 2, 00:30:24.594 "num_base_bdevs_operational": 2, 00:30:24.594 "base_bdevs_list": [ 00:30:24.594 { 00:30:24.594 "name": "BaseBdev1", 00:30:24.594 "uuid": "4edc6820-864c-4b56-91a1-1275a3ac30b1", 00:30:24.594 "is_configured": true, 00:30:24.594 "data_offset": 2048, 00:30:24.594 "data_size": 63488 00:30:24.594 }, 00:30:24.594 { 00:30:24.594 "name": "BaseBdev2", 00:30:24.594 "uuid": "afeb3017-dc37-44ee-b16d-836e7588d85c", 00:30:24.594 "is_configured": true, 00:30:24.594 "data_offset": 2048, 00:30:24.594 "data_size": 63488 00:30:24.594 } 00:30:24.594 ] 00:30:24.594 }' 00:30:24.594 19:23:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:24.594 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:30:25.158 19:23:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:25.416 [2024-04-18 19:23:41.237776] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:25.416 [2024-04-18 19:23:41.238001] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:25.416 [2024-04-18 19:23:41.238152] bdev_raid.c: 449:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:25.675 19:23:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:25.676 19:23:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.676 19:23:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.933 19:23:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:25.933 "name": "Existed_Raid", 00:30:25.933 "uuid": "ca021111-aafc-4f23-96cb-021f601de519", 00:30:25.933 "strip_size_kb": 64, 00:30:25.933 "state": "offline", 00:30:25.933 "raid_level": "raid0", 00:30:25.933 "superblock": true, 00:30:25.933 "num_base_bdevs": 2, 00:30:25.933 "num_base_bdevs_discovered": 1, 00:30:25.933 "num_base_bdevs_operational": 1, 00:30:25.933 "base_bdevs_list": [ 00:30:25.933 { 00:30:25.933 "name": null, 00:30:25.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.933 "is_configured": false, 00:30:25.933 "data_offset": 2048, 00:30:25.933 "data_size": 63488 00:30:25.933 }, 00:30:25.933 { 00:30:25.933 "name": "BaseBdev2", 00:30:25.933 "uuid": "afeb3017-dc37-44ee-b16d-836e7588d85c", 00:30:25.934 "is_configured": true, 00:30:25.934 "data_offset": 2048, 00:30:25.934 "data_size": 63488 00:30:25.934 } 00:30:25.934 ] 00:30:25.934 }' 00:30:25.934 19:23:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:25.934 19:23:41 -- common/autotest_common.sh@10 -- # set +x 00:30:26.499 19:23:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:30:26.499 19:23:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:26.499 19:23:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.499 19:23:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:30:26.757 19:23:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:30:26.757 19:23:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:26.757 19:23:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:26.757 [2024-04-18 19:23:42.675803] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:26.757 [2024-04-18 19:23:42.676081] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:30:27.015 19:23:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:30:27.016 19:23:42 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:30:27.016 19:23:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.016 19:23:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:30:27.374 19:23:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:30:27.374 19:23:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:30:27.374 19:23:43 -- bdev/bdev_raid.sh@287 -- # killprocess 121241 00:30:27.374 19:23:43 -- common/autotest_common.sh@936 -- # '[' -z 121241 ']' 00:30:27.374 19:23:43 -- common/autotest_common.sh@940 -- # kill -0 121241 00:30:27.374 19:23:43 -- common/autotest_common.sh@941 -- # uname 00:30:27.374 19:23:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:27.374 19:23:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121241 00:30:27.374 killing process with pid 121241 00:30:27.374 19:23:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:27.374 19:23:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:27.374 19:23:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121241' 00:30:27.374 19:23:43 -- common/autotest_common.sh@955 -- # kill 121241 00:30:27.374 19:23:43 -- common/autotest_common.sh@960 -- # wait 121241 00:30:27.374 [2024-04-18 19:23:43.102039] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:27.374 [2024-04-18 19:23:43.102159] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:28.748 ************************************ 00:30:28.748 END TEST raid_state_function_test_sb 00:30:28.748 ************************************ 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:30:28.748 00:30:28.748 real 0m12.595s 00:30:28.748 user 0m21.717s 00:30:28.748 sys 0m1.535s 00:30:28.748 19:23:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:28.748 19:23:44 -- common/autotest_common.sh@10 -- # set +x 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:30:28.748 19:23:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:30:28.748 19:23:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:28.748 19:23:44 -- common/autotest_common.sh@10 -- # set +x 00:30:28.748 ************************************ 00:30:28.748 START TEST raid_superblock_test 00:30:28.748 ************************************ 00:30:28.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
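[Editor's note, not part of the captured log: before this superblock-focused test starts, it may help to condense the RPC sequence the earlier state-function tests traced against the same /var/tmp/spdk-raid.sock target. Sizes, flags and bdev names below are copied from that trace; the sequence as a standalone script is an illustration only and assumes a bdev_svc target is already listening on the socket.]

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Two 32 MiB malloc bdevs with 512-byte blocks act as base devices.
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2

  # Assemble them into a raid0 bdev with a 64 KiB strip size; -s requests the
  # on-disk superblock that the superblock-aware tests exercise.
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # Inspect and tear down, as the tests do between steps.
  $rpc bdev_raid_get_bdevs all
  $rpc bdev_raid_delete Existed_Raid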
00:30:28.748 19:23:44 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 2 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=121616 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:30:28.748 19:23:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121616 /var/tmp/spdk-raid.sock 00:30:28.748 19:23:44 -- common/autotest_common.sh@817 -- # '[' -z 121616 ']' 00:30:28.748 19:23:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:28.748 19:23:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:28.748 19:23:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:28.748 19:23:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:28.748 19:23:44 -- common/autotest_common.sh@10 -- # set +x 00:30:28.748 [2024-04-18 19:23:44.617440] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:30:28.748 [2024-04-18 19:23:44.617971] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121616 ] 00:30:29.005 [2024-04-18 19:23:44.812156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.262 [2024-04-18 19:23:45.114103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.520 [2024-04-18 19:23:45.338627] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:29.779 19:23:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:29.779 19:23:45 -- common/autotest_common.sh@850 -- # return 0 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:29.779 19:23:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:30:30.037 malloc1 00:30:30.037 19:23:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:30.295 [2024-04-18 19:23:46.079320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:30.295 [2024-04-18 19:23:46.079612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:30.295 [2024-04-18 19:23:46.079676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:30.295 [2024-04-18 19:23:46.079799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:30.295 [2024-04-18 19:23:46.082303] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:30.295 [2024-04-18 19:23:46.082466] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:30.295 pt1 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:30.295 19:23:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:30:30.553 malloc2 00:30:30.553 19:23:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:30:30.811 [2024-04-18 19:23:46.673999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:30.811 [2024-04-18 19:23:46.674255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:30.811 [2024-04-18 19:23:46.674395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:30.811 [2024-04-18 19:23:46.674521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:30.811 [2024-04-18 19:23:46.676999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:30.811 [2024-04-18 19:23:46.677157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:30.811 pt2 00:30:30.811 19:23:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:30:30.811 19:23:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:30:30.811 19:23:46 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:30:31.070 [2024-04-18 19:23:46.866163] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:31.070 [2024-04-18 19:23:46.868501] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:31.070 [2024-04-18 19:23:46.868864] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:30:31.070 [2024-04-18 19:23:46.868985] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:31.070 [2024-04-18 19:23:46.869156] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:30:31.070 [2024-04-18 19:23:46.869562] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:30:31.070 [2024-04-18 19:23:46.869620] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:30:31.070 [2024-04-18 19:23:46.869836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.070 19:23:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.328 19:23:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:31.328 "name": "raid_bdev1", 00:30:31.328 "uuid": "4c88a62e-c4f6-4425-9e36-dd8942100789", 00:30:31.328 "strip_size_kb": 64, 00:30:31.328 "state": "online", 00:30:31.328 "raid_level": "raid0", 00:30:31.328 "superblock": true, 00:30:31.328 "num_base_bdevs": 2, 00:30:31.328 "num_base_bdevs_discovered": 2, 00:30:31.328 
"num_base_bdevs_operational": 2, 00:30:31.328 "base_bdevs_list": [ 00:30:31.328 { 00:30:31.328 "name": "pt1", 00:30:31.328 "uuid": "2fc51496-d959-5a82-ad9c-de956f4f699c", 00:30:31.328 "is_configured": true, 00:30:31.328 "data_offset": 2048, 00:30:31.328 "data_size": 63488 00:30:31.328 }, 00:30:31.328 { 00:30:31.328 "name": "pt2", 00:30:31.328 "uuid": "f72f439c-adaf-531d-852c-c73daee34669", 00:30:31.328 "is_configured": true, 00:30:31.328 "data_offset": 2048, 00:30:31.328 "data_size": 63488 00:30:31.328 } 00:30:31.328 ] 00:30:31.328 }' 00:30:31.328 19:23:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:31.328 19:23:47 -- common/autotest_common.sh@10 -- # set +x 00:30:32.263 19:23:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:30:32.263 19:23:47 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:32.263 [2024-04-18 19:23:48.091199] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:32.263 19:23:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4c88a62e-c4f6-4425-9e36-dd8942100789 00:30:32.263 19:23:48 -- bdev/bdev_raid.sh@380 -- # '[' -z 4c88a62e-c4f6-4425-9e36-dd8942100789 ']' 00:30:32.263 19:23:48 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:32.522 [2024-04-18 19:23:48.366945] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:32.522 [2024-04-18 19:23:48.367169] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:32.522 [2024-04-18 19:23:48.367356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:32.522 [2024-04-18 19:23:48.367594] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:32.522 [2024-04-18 19:23:48.367696] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:30:32.522 19:23:48 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.522 19:23:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:30:32.780 19:23:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:30:32.780 19:23:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:30:32.780 19:23:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:30:32.780 19:23:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:33.038 19:23:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:30:33.038 19:23:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:33.296 19:23:49 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:33.296 19:23:49 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:33.863 19:23:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:30:33.863 19:23:49 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:30:33.863 19:23:49 -- common/autotest_common.sh@638 -- # local es=0 00:30:33.863 19:23:49 -- common/autotest_common.sh@640 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:30:33.863 19:23:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.863 19:23:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:33.863 19:23:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.863 19:23:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:33.863 19:23:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.863 19:23:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:33.863 19:23:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.863 19:23:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:33.863 19:23:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:30:33.863 [2024-04-18 19:23:49.759249] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:33.863 [2024-04-18 19:23:49.761751] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:33.863 [2024-04-18 19:23:49.761963] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:30:33.863 [2024-04-18 19:23:49.762138] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:30:33.863 [2024-04-18 19:23:49.762268] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:33.863 [2024-04-18 19:23:49.762343] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:30:33.863 request: 00:30:33.863 { 00:30:33.863 "name": "raid_bdev1", 00:30:33.863 "raid_level": "raid0", 00:30:33.863 "base_bdevs": [ 00:30:33.863 "malloc1", 00:30:33.863 "malloc2" 00:30:33.863 ], 00:30:33.863 "superblock": false, 00:30:33.863 "strip_size_kb": 64, 00:30:33.863 "method": "bdev_raid_create", 00:30:33.863 "req_id": 1 00:30:33.863 } 00:30:33.863 Got JSON-RPC error response 00:30:33.863 response: 00:30:33.863 { 00:30:33.863 "code": -17, 00:30:33.863 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:33.863 } 00:30:33.863 19:23:49 -- common/autotest_common.sh@641 -- # es=1 00:30:33.863 19:23:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:33.863 19:23:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:33.863 19:23:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:33.863 19:23:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.863 19:23:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:30:34.429 19:23:50 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:30:34.429 19:23:50 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:30:34.429 19:23:50 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:34.429 [2024-04-18 19:23:50.344425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:34.430 [2024-04-18 19:23:50.344760] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.430 [2024-04-18 19:23:50.344897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:34.430 [2024-04-18 19:23:50.345047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.430 [2024-04-18 19:23:50.347731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.430 [2024-04-18 19:23:50.347914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:34.430 [2024-04-18 19:23:50.348123] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:30:34.430 [2024-04-18 19:23:50.348270] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:34.430 pt1 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.687 19:23:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.945 19:23:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:34.945 "name": "raid_bdev1", 00:30:34.945 "uuid": "4c88a62e-c4f6-4425-9e36-dd8942100789", 00:30:34.945 "strip_size_kb": 64, 00:30:34.945 "state": "configuring", 00:30:34.945 "raid_level": "raid0", 00:30:34.945 "superblock": true, 00:30:34.945 "num_base_bdevs": 2, 00:30:34.945 "num_base_bdevs_discovered": 1, 00:30:34.945 "num_base_bdevs_operational": 2, 00:30:34.945 "base_bdevs_list": [ 00:30:34.945 { 00:30:34.945 "name": "pt1", 00:30:34.945 "uuid": "2fc51496-d959-5a82-ad9c-de956f4f699c", 00:30:34.945 "is_configured": true, 00:30:34.945 "data_offset": 2048, 00:30:34.945 "data_size": 63488 00:30:34.945 }, 00:30:34.945 { 00:30:34.945 "name": null, 00:30:34.945 "uuid": "f72f439c-adaf-531d-852c-c73daee34669", 00:30:34.945 "is_configured": false, 00:30:34.945 "data_offset": 2048, 00:30:34.945 "data_size": 63488 00:30:34.945 } 00:30:34.945 ] 00:30:34.945 }' 00:30:34.945 19:23:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:34.945 19:23:50 -- common/autotest_common.sh@10 -- # set +x 00:30:35.512 19:23:51 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:30:35.512 19:23:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:30:35.512 19:23:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:30:35.512 19:23:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:35.770 [2024-04-18 19:23:51.624911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:35.770 [2024-04-18 19:23:51.626131] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:30:35.770 [2024-04-18 19:23:51.626732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:35.770 [2024-04-18 19:23:51.627146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.770 [2024-04-18 19:23:51.628868] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.770 [2024-04-18 19:23:51.629228] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:35.770 [2024-04-18 19:23:51.629723] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:30:35.770 [2024-04-18 19:23:51.629993] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:35.770 [2024-04-18 19:23:51.630493] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:30:35.770 [2024-04-18 19:23:51.630732] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:35.770 pt2 00:30:35.770 [2024-04-18 19:23:51.631310] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:30:35.770 [2024-04-18 19:23:51.632339] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:30:35.770 [2024-04-18 19:23:51.632582] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:30:35.770 [2024-04-18 19:23:51.633130] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.770 19:23:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.029 19:23:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:36.029 "name": "raid_bdev1", 00:30:36.029 "uuid": "4c88a62e-c4f6-4425-9e36-dd8942100789", 00:30:36.029 "strip_size_kb": 64, 00:30:36.029 "state": "online", 00:30:36.029 "raid_level": "raid0", 00:30:36.029 "superblock": true, 00:30:36.029 "num_base_bdevs": 2, 00:30:36.029 "num_base_bdevs_discovered": 2, 00:30:36.029 "num_base_bdevs_operational": 2, 00:30:36.029 "base_bdevs_list": [ 00:30:36.029 { 00:30:36.029 "name": "pt1", 00:30:36.029 "uuid": "2fc51496-d959-5a82-ad9c-de956f4f699c", 00:30:36.029 "is_configured": true, 00:30:36.029 "data_offset": 2048, 00:30:36.029 "data_size": 63488 00:30:36.029 }, 00:30:36.029 { 00:30:36.029 "name": "pt2", 00:30:36.029 "uuid": "f72f439c-adaf-531d-852c-c73daee34669", 00:30:36.029 
"is_configured": true, 00:30:36.029 "data_offset": 2048, 00:30:36.029 "data_size": 63488 00:30:36.029 } 00:30:36.029 ] 00:30:36.029 }' 00:30:36.029 19:23:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:36.029 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:30:36.594 19:23:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:36.594 19:23:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:30:36.853 [2024-04-18 19:23:52.754264] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:36.853 19:23:52 -- bdev/bdev_raid.sh@430 -- # '[' 4c88a62e-c4f6-4425-9e36-dd8942100789 '!=' 4c88a62e-c4f6-4425-9e36-dd8942100789 ']' 00:30:36.853 19:23:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:30:36.853 19:23:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:36.853 19:23:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:30:36.853 19:23:52 -- bdev/bdev_raid.sh@511 -- # killprocess 121616 00:30:36.853 19:23:52 -- common/autotest_common.sh@936 -- # '[' -z 121616 ']' 00:30:36.853 19:23:52 -- common/autotest_common.sh@940 -- # kill -0 121616 00:30:37.112 19:23:52 -- common/autotest_common.sh@941 -- # uname 00:30:37.113 19:23:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:37.113 19:23:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121616 00:30:37.113 killing process with pid 121616 00:30:37.113 19:23:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:37.113 19:23:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:37.113 19:23:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121616' 00:30:37.113 19:23:52 -- common/autotest_common.sh@955 -- # kill 121616 00:30:37.113 19:23:52 -- common/autotest_common.sh@960 -- # wait 121616 00:30:37.113 [2024-04-18 19:23:52.803853] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:37.113 [2024-04-18 19:23:52.803952] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:37.113 [2024-04-18 19:23:52.804001] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:37.113 [2024-04-18 19:23:52.804012] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:30:37.113 [2024-04-18 19:23:52.999348] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:38.489 ************************************ 00:30:38.489 END TEST raid_superblock_test 00:30:38.489 ************************************ 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:30:38.489 00:30:38.489 real 0m9.754s 00:30:38.489 user 0m16.542s 00:30:38.489 sys 0m1.183s 00:30:38.489 19:23:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:38.489 19:23:54 -- common/autotest_common.sh@10 -- # set +x 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:30:38.489 19:23:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:38.489 19:23:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.489 19:23:54 -- common/autotest_common.sh@10 -- # set +x 00:30:38.489 ************************************ 00:30:38.489 START TEST raid_state_function_test 00:30:38.489 ************************************ 00:30:38.489 19:23:54 -- 
common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 false 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=121897 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121897' 00:30:38.489 Process raid pid: 121897 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121897 /var/tmp/spdk-raid.sock 00:30:38.489 19:23:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:38.489 19:23:54 -- common/autotest_common.sh@817 -- # '[' -z 121897 ']' 00:30:38.489 19:23:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:38.489 19:23:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:38.489 19:23:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:38.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:38.489 19:23:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:38.489 19:23:54 -- common/autotest_common.sh@10 -- # set +x 00:30:38.747 [2024-04-18 19:23:54.437187] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:30:38.747 [2024-04-18 19:23:54.437526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.747 [2024-04-18 19:23:54.600546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.005 [2024-04-18 19:23:54.866608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.263 [2024-04-18 19:23:55.068128] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:39.521 19:23:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:39.521 19:23:55 -- common/autotest_common.sh@850 -- # return 0 00:30:39.521 19:23:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:39.780 [2024-04-18 19:23:55.672402] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:39.780 [2024-04-18 19:23:55.672652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:39.780 [2024-04-18 19:23:55.672735] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:39.780 [2024-04-18 19:23:55.672782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.780 19:23:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.039 19:23:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:40.039 "name": "Existed_Raid", 00:30:40.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.039 "strip_size_kb": 64, 00:30:40.039 "state": "configuring", 00:30:40.039 "raid_level": "concat", 00:30:40.039 "superblock": false, 00:30:40.039 "num_base_bdevs": 2, 00:30:40.039 "num_base_bdevs_discovered": 0, 00:30:40.039 "num_base_bdevs_operational": 2, 00:30:40.039 "base_bdevs_list": [ 00:30:40.039 { 00:30:40.039 "name": "BaseBdev1", 00:30:40.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.039 "is_configured": false, 00:30:40.039 "data_offset": 0, 00:30:40.039 "data_size": 0 00:30:40.039 }, 00:30:40.039 { 00:30:40.039 "name": "BaseBdev2", 00:30:40.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.039 "is_configured": false, 00:30:40.039 "data_offset": 0, 00:30:40.039 "data_size": 0 00:30:40.039 } 00:30:40.039 ] 00:30:40.039 }' 00:30:40.039 19:23:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:40.039 19:23:55 -- 
common/autotest_common.sh@10 -- # set +x 00:30:40.996 19:23:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:40.996 [2024-04-18 19:23:56.888546] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:40.996 [2024-04-18 19:23:56.888755] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:30:40.996 19:23:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:41.255 [2024-04-18 19:23:57.144620] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:41.255 [2024-04-18 19:23:57.144879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:41.255 [2024-04-18 19:23:57.144963] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:41.255 [2024-04-18 19:23:57.145056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:41.255 19:23:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:41.512 [2024-04-18 19:23:57.436539] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:41.512 BaseBdev1 00:30:41.769 19:23:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:30:41.769 19:23:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:41.769 19:23:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:41.769 19:23:57 -- common/autotest_common.sh@887 -- # local i 00:30:41.769 19:23:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:41.769 19:23:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:41.769 19:23:57 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:41.769 19:23:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:42.027 [ 00:30:42.027 { 00:30:42.027 "name": "BaseBdev1", 00:30:42.027 "aliases": [ 00:30:42.027 "395f2921-5116-491b-8a8f-19d1c9af0b52" 00:30:42.027 ], 00:30:42.027 "product_name": "Malloc disk", 00:30:42.027 "block_size": 512, 00:30:42.027 "num_blocks": 65536, 00:30:42.027 "uuid": "395f2921-5116-491b-8a8f-19d1c9af0b52", 00:30:42.027 "assigned_rate_limits": { 00:30:42.027 "rw_ios_per_sec": 0, 00:30:42.027 "rw_mbytes_per_sec": 0, 00:30:42.027 "r_mbytes_per_sec": 0, 00:30:42.027 "w_mbytes_per_sec": 0 00:30:42.027 }, 00:30:42.027 "claimed": true, 00:30:42.027 "claim_type": "exclusive_write", 00:30:42.027 "zoned": false, 00:30:42.027 "supported_io_types": { 00:30:42.027 "read": true, 00:30:42.027 "write": true, 00:30:42.027 "unmap": true, 00:30:42.027 "write_zeroes": true, 00:30:42.027 "flush": true, 00:30:42.027 "reset": true, 00:30:42.027 "compare": false, 00:30:42.027 "compare_and_write": false, 00:30:42.027 "abort": true, 00:30:42.027 "nvme_admin": false, 00:30:42.027 "nvme_io": false 00:30:42.027 }, 00:30:42.027 "memory_domains": [ 00:30:42.027 { 00:30:42.027 "dma_device_id": "system", 00:30:42.027 "dma_device_type": 1 00:30:42.027 }, 00:30:42.027 { 00:30:42.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.027 "dma_device_type": 2 00:30:42.027 
} 00:30:42.027 ], 00:30:42.027 "driver_specific": {} 00:30:42.027 } 00:30:42.027 ] 00:30:42.027 19:23:57 -- common/autotest_common.sh@893 -- # return 0 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.027 19:23:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.284 19:23:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:42.284 "name": "Existed_Raid", 00:30:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.284 "strip_size_kb": 64, 00:30:42.284 "state": "configuring", 00:30:42.284 "raid_level": "concat", 00:30:42.284 "superblock": false, 00:30:42.284 "num_base_bdevs": 2, 00:30:42.284 "num_base_bdevs_discovered": 1, 00:30:42.284 "num_base_bdevs_operational": 2, 00:30:42.284 "base_bdevs_list": [ 00:30:42.284 { 00:30:42.284 "name": "BaseBdev1", 00:30:42.284 "uuid": "395f2921-5116-491b-8a8f-19d1c9af0b52", 00:30:42.284 "is_configured": true, 00:30:42.284 "data_offset": 0, 00:30:42.284 "data_size": 65536 00:30:42.284 }, 00:30:42.284 { 00:30:42.284 "name": "BaseBdev2", 00:30:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.284 "is_configured": false, 00:30:42.284 "data_offset": 0, 00:30:42.284 "data_size": 0 00:30:42.284 } 00:30:42.284 ] 00:30:42.284 }' 00:30:42.284 19:23:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:42.284 19:23:58 -- common/autotest_common.sh@10 -- # set +x 00:30:42.849 19:23:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:43.108 [2024-04-18 19:23:59.016930] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:43.108 [2024-04-18 19:23:59.017151] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:30:43.366 19:23:59 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:30:43.366 19:23:59 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:43.366 [2024-04-18 19:23:59.293006] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:43.624 [2024-04-18 19:23:59.295281] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:43.624 [2024-04-18 19:23:59.295482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:43.624 19:23:59 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:43.624 "name": "Existed_Raid", 00:30:43.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.624 "strip_size_kb": 64, 00:30:43.624 "state": "configuring", 00:30:43.624 "raid_level": "concat", 00:30:43.624 "superblock": false, 00:30:43.624 "num_base_bdevs": 2, 00:30:43.624 "num_base_bdevs_discovered": 1, 00:30:43.624 "num_base_bdevs_operational": 2, 00:30:43.624 "base_bdevs_list": [ 00:30:43.624 { 00:30:43.624 "name": "BaseBdev1", 00:30:43.624 "uuid": "395f2921-5116-491b-8a8f-19d1c9af0b52", 00:30:43.624 "is_configured": true, 00:30:43.624 "data_offset": 0, 00:30:43.624 "data_size": 65536 00:30:43.624 }, 00:30:43.624 { 00:30:43.624 "name": "BaseBdev2", 00:30:43.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.624 "is_configured": false, 00:30:43.624 "data_offset": 0, 00:30:43.624 "data_size": 0 00:30:43.624 } 00:30:43.624 ] 00:30:43.624 }' 00:30:43.624 19:23:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:43.624 19:23:59 -- common/autotest_common.sh@10 -- # set +x 00:30:44.582 19:24:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:44.840 [2024-04-18 19:24:00.578464] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:44.840 [2024-04-18 19:24:00.578703] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:30:44.840 [2024-04-18 19:24:00.578753] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:44.840 [2024-04-18 19:24:00.579027] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:30:44.840 [2024-04-18 19:24:00.579492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:30:44.840 [2024-04-18 19:24:00.579604] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:30:44.840 [2024-04-18 19:24:00.579940] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:44.840 BaseBdev2 00:30:44.840 19:24:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:30:44.840 19:24:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:30:44.840 19:24:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:44.840 19:24:00 -- common/autotest_common.sh@887 -- # local i 00:30:44.840 19:24:00 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:44.840 19:24:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:44.840 19:24:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:45.099 19:24:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:45.099 [ 00:30:45.099 { 00:30:45.099 "name": "BaseBdev2", 00:30:45.099 "aliases": [ 00:30:45.099 "1867a442-0089-4b1c-9c27-fd2551d9d4d5" 00:30:45.099 ], 00:30:45.099 "product_name": "Malloc disk", 00:30:45.099 "block_size": 512, 00:30:45.099 "num_blocks": 65536, 00:30:45.099 "uuid": "1867a442-0089-4b1c-9c27-fd2551d9d4d5", 00:30:45.099 "assigned_rate_limits": { 00:30:45.099 "rw_ios_per_sec": 0, 00:30:45.099 "rw_mbytes_per_sec": 0, 00:30:45.099 "r_mbytes_per_sec": 0, 00:30:45.099 "w_mbytes_per_sec": 0 00:30:45.099 }, 00:30:45.099 "claimed": true, 00:30:45.099 "claim_type": "exclusive_write", 00:30:45.099 "zoned": false, 00:30:45.099 "supported_io_types": { 00:30:45.099 "read": true, 00:30:45.099 "write": true, 00:30:45.099 "unmap": true, 00:30:45.099 "write_zeroes": true, 00:30:45.099 "flush": true, 00:30:45.099 "reset": true, 00:30:45.099 "compare": false, 00:30:45.099 "compare_and_write": false, 00:30:45.099 "abort": true, 00:30:45.099 "nvme_admin": false, 00:30:45.099 "nvme_io": false 00:30:45.099 }, 00:30:45.099 "memory_domains": [ 00:30:45.099 { 00:30:45.099 "dma_device_id": "system", 00:30:45.099 "dma_device_type": 1 00:30:45.099 }, 00:30:45.099 { 00:30:45.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.099 "dma_device_type": 2 00:30:45.099 } 00:30:45.099 ], 00:30:45.099 "driver_specific": {} 00:30:45.099 } 00:30:45.099 ] 00:30:45.099 19:24:01 -- common/autotest_common.sh@893 -- # return 0 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:45.099 19:24:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:45.356 19:24:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.356 19:24:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.356 19:24:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:45.356 "name": "Existed_Raid", 00:30:45.356 "uuid": "ff022b4b-6e90-4dca-9d09-1a031ec11447", 00:30:45.356 "strip_size_kb": 64, 00:30:45.356 "state": "online", 00:30:45.356 "raid_level": "concat", 00:30:45.356 "superblock": false, 00:30:45.356 "num_base_bdevs": 2, 00:30:45.356 "num_base_bdevs_discovered": 2, 00:30:45.356 "num_base_bdevs_operational": 2, 00:30:45.356 "base_bdevs_list": [ 00:30:45.356 { 00:30:45.356 
"name": "BaseBdev1", 00:30:45.356 "uuid": "395f2921-5116-491b-8a8f-19d1c9af0b52", 00:30:45.356 "is_configured": true, 00:30:45.356 "data_offset": 0, 00:30:45.356 "data_size": 65536 00:30:45.356 }, 00:30:45.356 { 00:30:45.356 "name": "BaseBdev2", 00:30:45.356 "uuid": "1867a442-0089-4b1c-9c27-fd2551d9d4d5", 00:30:45.356 "is_configured": true, 00:30:45.356 "data_offset": 0, 00:30:45.356 "data_size": 65536 00:30:45.356 } 00:30:45.356 ] 00:30:45.356 }' 00:30:45.356 19:24:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:45.356 19:24:01 -- common/autotest_common.sh@10 -- # set +x 00:30:46.290 19:24:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:46.290 [2024-04-18 19:24:02.130943] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:46.290 [2024-04-18 19:24:02.132898] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:46.290 [2024-04-18 19:24:02.133144] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:46.549 19:24:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:46.550 19:24:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:46.550 19:24:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.550 19:24:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.808 19:24:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:46.808 "name": "Existed_Raid", 00:30:46.808 "uuid": "ff022b4b-6e90-4dca-9d09-1a031ec11447", 00:30:46.808 "strip_size_kb": 64, 00:30:46.808 "state": "offline", 00:30:46.808 "raid_level": "concat", 00:30:46.808 "superblock": false, 00:30:46.808 "num_base_bdevs": 2, 00:30:46.808 "num_base_bdevs_discovered": 1, 00:30:46.808 "num_base_bdevs_operational": 1, 00:30:46.808 "base_bdevs_list": [ 00:30:46.808 { 00:30:46.808 "name": null, 00:30:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.808 "is_configured": false, 00:30:46.808 "data_offset": 0, 00:30:46.808 "data_size": 65536 00:30:46.808 }, 00:30:46.808 { 00:30:46.808 "name": "BaseBdev2", 00:30:46.808 "uuid": "1867a442-0089-4b1c-9c27-fd2551d9d4d5", 00:30:46.809 "is_configured": true, 00:30:46.809 "data_offset": 0, 00:30:46.809 "data_size": 65536 00:30:46.809 } 00:30:46.809 ] 00:30:46.809 }' 00:30:46.809 19:24:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:46.809 19:24:02 -- common/autotest_common.sh@10 -- # set 
+x 00:30:47.399 19:24:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:30:47.399 19:24:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:47.399 19:24:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.399 19:24:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:30:47.660 19:24:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:30:47.660 19:24:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:47.660 19:24:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:47.917 [2024-04-18 19:24:03.840666] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:47.917 [2024-04-18 19:24:03.840931] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:30:48.175 19:24:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:30:48.175 19:24:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:30:48.175 19:24:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.175 19:24:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:30:48.433 19:24:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:30:48.433 19:24:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:30:48.433 19:24:04 -- bdev/bdev_raid.sh@287 -- # killprocess 121897 00:30:48.433 19:24:04 -- common/autotest_common.sh@936 -- # '[' -z 121897 ']' 00:30:48.433 19:24:04 -- common/autotest_common.sh@940 -- # kill -0 121897 00:30:48.433 19:24:04 -- common/autotest_common.sh@941 -- # uname 00:30:48.433 19:24:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:48.433 19:24:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121897 00:30:48.433 killing process with pid 121897 00:30:48.433 19:24:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:48.433 19:24:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:48.433 19:24:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121897' 00:30:48.433 19:24:04 -- common/autotest_common.sh@955 -- # kill 121897 00:30:48.433 19:24:04 -- common/autotest_common.sh@960 -- # wait 121897 00:30:48.433 [2024-04-18 19:24:04.258927] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:48.433 [2024-04-18 19:24:04.259049] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:49.809 ************************************ 00:30:49.809 END TEST raid_state_function_test 00:30:49.809 ************************************ 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:30:49.809 00:30:49.809 real 0m11.217s 00:30:49.809 user 0m19.278s 00:30:49.809 sys 0m1.387s 00:30:49.809 19:24:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:49.809 19:24:05 -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:30:49.809 19:24:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:49.809 19:24:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.809 19:24:05 -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 ************************************ 00:30:49.809 START TEST raid_state_function_test_sb 00:30:49.809 ************************************ 00:30:49.809 19:24:05 -- 
common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 true 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:49.809 19:24:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=122253 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:49.810 Process raid pid: 122253 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122253' 00:30:49.810 19:24:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122253 /var/tmp/spdk-raid.sock 00:30:49.810 19:24:05 -- common/autotest_common.sh@817 -- # '[' -z 122253 ']' 00:30:49.810 19:24:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:49.810 19:24:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:49.810 19:24:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:49.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:49.810 19:24:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:49.810 19:24:05 -- common/autotest_common.sh@10 -- # set +x 00:30:50.069 [2024-04-18 19:24:05.756526] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:30:50.069 [2024-04-18 19:24:05.756799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.069 [2024-04-18 19:24:05.916745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.364 [2024-04-18 19:24:06.120429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.625 [2024-04-18 19:24:06.339156] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:50.884 19:24:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:50.884 19:24:06 -- common/autotest_common.sh@850 -- # return 0 00:30:50.884 19:24:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:51.142 [2024-04-18 19:24:06.983322] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:51.142 [2024-04-18 19:24:06.983555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:51.142 [2024-04-18 19:24:06.983657] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:51.142 [2024-04-18 19:24:06.983709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.142 19:24:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.401 19:24:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:51.401 "name": "Existed_Raid", 00:30:51.401 "uuid": "63b21c89-55ac-4c0e-bccf-53bb96286725", 00:30:51.401 "strip_size_kb": 64, 00:30:51.401 "state": "configuring", 00:30:51.401 "raid_level": "concat", 00:30:51.401 "superblock": true, 00:30:51.401 "num_base_bdevs": 2, 00:30:51.401 "num_base_bdevs_discovered": 0, 00:30:51.401 "num_base_bdevs_operational": 2, 00:30:51.401 "base_bdevs_list": [ 00:30:51.401 { 00:30:51.401 "name": "BaseBdev1", 00:30:51.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.401 "is_configured": false, 00:30:51.401 "data_offset": 0, 00:30:51.401 "data_size": 0 00:30:51.401 }, 00:30:51.401 { 00:30:51.401 "name": "BaseBdev2", 00:30:51.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.401 "is_configured": false, 00:30:51.401 "data_offset": 0, 00:30:51.401 "data_size": 0 00:30:51.401 } 00:30:51.401 ] 00:30:51.401 }' 00:30:51.401 19:24:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:51.401 19:24:07 -- 
common/autotest_common.sh@10 -- # set +x 00:30:52.336 19:24:07 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:52.336 [2024-04-18 19:24:08.251491] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:52.336 [2024-04-18 19:24:08.251662] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:30:52.593 19:24:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:52.593 [2024-04-18 19:24:08.515599] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:52.593 [2024-04-18 19:24:08.515869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:52.594 [2024-04-18 19:24:08.515960] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:52.594 [2024-04-18 19:24:08.516016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:52.851 19:24:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:53.109 [2024-04-18 19:24:08.837573] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:53.109 BaseBdev1 00:30:53.109 19:24:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:30:53.109 19:24:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:53.109 19:24:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:53.109 19:24:08 -- common/autotest_common.sh@887 -- # local i 00:30:53.109 19:24:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:53.109 19:24:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:53.109 19:24:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:53.383 19:24:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:53.641 [ 00:30:53.641 { 00:30:53.641 "name": "BaseBdev1", 00:30:53.641 "aliases": [ 00:30:53.641 "e76dd476-1843-4452-9c40-545940ce916d" 00:30:53.641 ], 00:30:53.641 "product_name": "Malloc disk", 00:30:53.641 "block_size": 512, 00:30:53.641 "num_blocks": 65536, 00:30:53.641 "uuid": "e76dd476-1843-4452-9c40-545940ce916d", 00:30:53.641 "assigned_rate_limits": { 00:30:53.641 "rw_ios_per_sec": 0, 00:30:53.641 "rw_mbytes_per_sec": 0, 00:30:53.641 "r_mbytes_per_sec": 0, 00:30:53.641 "w_mbytes_per_sec": 0 00:30:53.641 }, 00:30:53.641 "claimed": true, 00:30:53.641 "claim_type": "exclusive_write", 00:30:53.641 "zoned": false, 00:30:53.641 "supported_io_types": { 00:30:53.641 "read": true, 00:30:53.641 "write": true, 00:30:53.641 "unmap": true, 00:30:53.641 "write_zeroes": true, 00:30:53.641 "flush": true, 00:30:53.641 "reset": true, 00:30:53.641 "compare": false, 00:30:53.641 "compare_and_write": false, 00:30:53.641 "abort": true, 00:30:53.641 "nvme_admin": false, 00:30:53.641 "nvme_io": false 00:30:53.641 }, 00:30:53.641 "memory_domains": [ 00:30:53.641 { 00:30:53.641 "dma_device_id": "system", 00:30:53.641 "dma_device_type": 1 00:30:53.641 }, 00:30:53.641 { 00:30:53.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.641 "dma_device_type": 2 
00:30:53.641 } 00:30:53.641 ], 00:30:53.641 "driver_specific": {} 00:30:53.641 } 00:30:53.641 ] 00:30:53.641 19:24:09 -- common/autotest_common.sh@893 -- # return 0 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.641 19:24:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.898 19:24:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:53.898 "name": "Existed_Raid", 00:30:53.898 "uuid": "2a875891-288e-40e3-8268-294c331a3cfa", 00:30:53.898 "strip_size_kb": 64, 00:30:53.898 "state": "configuring", 00:30:53.898 "raid_level": "concat", 00:30:53.898 "superblock": true, 00:30:53.898 "num_base_bdevs": 2, 00:30:53.898 "num_base_bdevs_discovered": 1, 00:30:53.898 "num_base_bdevs_operational": 2, 00:30:53.898 "base_bdevs_list": [ 00:30:53.898 { 00:30:53.898 "name": "BaseBdev1", 00:30:53.898 "uuid": "e76dd476-1843-4452-9c40-545940ce916d", 00:30:53.898 "is_configured": true, 00:30:53.898 "data_offset": 2048, 00:30:53.898 "data_size": 63488 00:30:53.898 }, 00:30:53.898 { 00:30:53.898 "name": "BaseBdev2", 00:30:53.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.898 "is_configured": false, 00:30:53.898 "data_offset": 0, 00:30:53.898 "data_size": 0 00:30:53.898 } 00:30:53.898 ] 00:30:53.898 }' 00:30:53.898 19:24:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:53.898 19:24:09 -- common/autotest_common.sh@10 -- # set +x 00:30:54.832 19:24:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:54.832 [2024-04-18 19:24:10.718040] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:54.832 [2024-04-18 19:24:10.718318] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:30:54.832 19:24:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:30:54.832 19:24:10 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:55.398 19:24:11 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:55.656 BaseBdev1 00:30:55.656 19:24:11 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:30:55.656 19:24:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:30:55.656 19:24:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:55.656 19:24:11 -- common/autotest_common.sh@887 -- # local i 00:30:55.656 19:24:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:55.656 19:24:11 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:55.656 19:24:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:55.914 19:24:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:56.172 [ 00:30:56.172 { 00:30:56.172 "name": "BaseBdev1", 00:30:56.172 "aliases": [ 00:30:56.172 "9fafcc81-12ee-4412-8a14-7e86d47117b3" 00:30:56.172 ], 00:30:56.172 "product_name": "Malloc disk", 00:30:56.172 "block_size": 512, 00:30:56.172 "num_blocks": 65536, 00:30:56.172 "uuid": "9fafcc81-12ee-4412-8a14-7e86d47117b3", 00:30:56.172 "assigned_rate_limits": { 00:30:56.172 "rw_ios_per_sec": 0, 00:30:56.172 "rw_mbytes_per_sec": 0, 00:30:56.172 "r_mbytes_per_sec": 0, 00:30:56.172 "w_mbytes_per_sec": 0 00:30:56.172 }, 00:30:56.172 "claimed": false, 00:30:56.172 "zoned": false, 00:30:56.172 "supported_io_types": { 00:30:56.172 "read": true, 00:30:56.172 "write": true, 00:30:56.172 "unmap": true, 00:30:56.172 "write_zeroes": true, 00:30:56.172 "flush": true, 00:30:56.172 "reset": true, 00:30:56.172 "compare": false, 00:30:56.172 "compare_and_write": false, 00:30:56.172 "abort": true, 00:30:56.172 "nvme_admin": false, 00:30:56.172 "nvme_io": false 00:30:56.172 }, 00:30:56.172 "memory_domains": [ 00:30:56.172 { 00:30:56.172 "dma_device_id": "system", 00:30:56.172 "dma_device_type": 1 00:30:56.172 }, 00:30:56.172 { 00:30:56.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:56.172 "dma_device_type": 2 00:30:56.172 } 00:30:56.172 ], 00:30:56.172 "driver_specific": {} 00:30:56.172 } 00:30:56.172 ] 00:30:56.172 19:24:12 -- common/autotest_common.sh@893 -- # return 0 00:30:56.172 19:24:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:56.431 [2024-04-18 19:24:12.260542] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:56.431 [2024-04-18 19:24:12.262871] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:56.431 [2024-04-18 19:24:12.263051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.431 19:24:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.691 
19:24:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:56.691 "name": "Existed_Raid", 00:30:56.691 "uuid": "831475a6-0e9a-4b14-bf1e-dc0d8ae3cefa", 00:30:56.691 "strip_size_kb": 64, 00:30:56.691 "state": "configuring", 00:30:56.691 "raid_level": "concat", 00:30:56.691 "superblock": true, 00:30:56.691 "num_base_bdevs": 2, 00:30:56.691 "num_base_bdevs_discovered": 1, 00:30:56.691 "num_base_bdevs_operational": 2, 00:30:56.691 "base_bdevs_list": [ 00:30:56.691 { 00:30:56.691 "name": "BaseBdev1", 00:30:56.691 "uuid": "9fafcc81-12ee-4412-8a14-7e86d47117b3", 00:30:56.691 "is_configured": true, 00:30:56.691 "data_offset": 2048, 00:30:56.691 "data_size": 63488 00:30:56.691 }, 00:30:56.691 { 00:30:56.691 "name": "BaseBdev2", 00:30:56.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.691 "is_configured": false, 00:30:56.691 "data_offset": 0, 00:30:56.691 "data_size": 0 00:30:56.691 } 00:30:56.691 ] 00:30:56.691 }' 00:30:56.691 19:24:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:56.691 19:24:12 -- common/autotest_common.sh@10 -- # set +x 00:30:57.625 19:24:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:57.883 [2024-04-18 19:24:13.610922] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:57.883 [2024-04-18 19:24:13.611395] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:30:57.883 [2024-04-18 19:24:13.611526] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:57.883 [2024-04-18 19:24:13.611727] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:30:57.883 BaseBdev2 00:30:57.883 [2024-04-18 19:24:13.612116] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:30:57.883 [2024-04-18 19:24:13.612130] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:30:57.883 [2024-04-18 19:24:13.612297] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.883 19:24:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:30:57.883 19:24:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:30:57.883 19:24:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:30:57.883 19:24:13 -- common/autotest_common.sh@887 -- # local i 00:30:57.883 19:24:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:30:57.883 19:24:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:30:57.883 19:24:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:58.141 19:24:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:58.400 [ 00:30:58.400 { 00:30:58.400 "name": "BaseBdev2", 00:30:58.400 "aliases": [ 00:30:58.400 "ece3151b-e131-42ca-ad0c-f4ecbc7bc94b" 00:30:58.400 ], 00:30:58.400 "product_name": "Malloc disk", 00:30:58.400 "block_size": 512, 00:30:58.400 "num_blocks": 65536, 00:30:58.400 "uuid": "ece3151b-e131-42ca-ad0c-f4ecbc7bc94b", 00:30:58.400 "assigned_rate_limits": { 00:30:58.400 "rw_ios_per_sec": 0, 00:30:58.400 "rw_mbytes_per_sec": 0, 00:30:58.400 "r_mbytes_per_sec": 0, 00:30:58.400 "w_mbytes_per_sec": 0 00:30:58.400 }, 00:30:58.400 "claimed": true, 00:30:58.400 "claim_type": "exclusive_write", 00:30:58.400 
"zoned": false, 00:30:58.400 "supported_io_types": { 00:30:58.400 "read": true, 00:30:58.400 "write": true, 00:30:58.400 "unmap": true, 00:30:58.400 "write_zeroes": true, 00:30:58.400 "flush": true, 00:30:58.400 "reset": true, 00:30:58.400 "compare": false, 00:30:58.400 "compare_and_write": false, 00:30:58.400 "abort": true, 00:30:58.400 "nvme_admin": false, 00:30:58.400 "nvme_io": false 00:30:58.400 }, 00:30:58.400 "memory_domains": [ 00:30:58.400 { 00:30:58.400 "dma_device_id": "system", 00:30:58.400 "dma_device_type": 1 00:30:58.400 }, 00:30:58.400 { 00:30:58.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:58.400 "dma_device_type": 2 00:30:58.400 } 00:30:58.400 ], 00:30:58.400 "driver_specific": {} 00:30:58.400 } 00:30:58.400 ] 00:30:58.400 19:24:14 -- common/autotest_common.sh@893 -- # return 0 00:30:58.400 19:24:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.401 19:24:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.658 19:24:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:30:58.658 "name": "Existed_Raid", 00:30:58.658 "uuid": "831475a6-0e9a-4b14-bf1e-dc0d8ae3cefa", 00:30:58.658 "strip_size_kb": 64, 00:30:58.658 "state": "online", 00:30:58.658 "raid_level": "concat", 00:30:58.658 "superblock": true, 00:30:58.658 "num_base_bdevs": 2, 00:30:58.658 "num_base_bdevs_discovered": 2, 00:30:58.658 "num_base_bdevs_operational": 2, 00:30:58.658 "base_bdevs_list": [ 00:30:58.659 { 00:30:58.659 "name": "BaseBdev1", 00:30:58.659 "uuid": "9fafcc81-12ee-4412-8a14-7e86d47117b3", 00:30:58.659 "is_configured": true, 00:30:58.659 "data_offset": 2048, 00:30:58.659 "data_size": 63488 00:30:58.659 }, 00:30:58.659 { 00:30:58.659 "name": "BaseBdev2", 00:30:58.659 "uuid": "ece3151b-e131-42ca-ad0c-f4ecbc7bc94b", 00:30:58.659 "is_configured": true, 00:30:58.659 "data_offset": 2048, 00:30:58.659 "data_size": 63488 00:30:58.659 } 00:30:58.659 ] 00:30:58.659 }' 00:30:58.659 19:24:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:30:58.659 19:24:14 -- common/autotest_common.sh@10 -- # set +x 00:30:59.592 19:24:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:59.592 [2024-04-18 19:24:15.499565] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:59.592 [2024-04-18 19:24:15.499799] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:59.592 [2024-04-18 19:24:15.499943] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.851 19:24:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:00.108 19:24:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:00.108 "name": "Existed_Raid", 00:31:00.108 "uuid": "831475a6-0e9a-4b14-bf1e-dc0d8ae3cefa", 00:31:00.108 "strip_size_kb": 64, 00:31:00.108 "state": "offline", 00:31:00.108 "raid_level": "concat", 00:31:00.108 "superblock": true, 00:31:00.108 "num_base_bdevs": 2, 00:31:00.108 "num_base_bdevs_discovered": 1, 00:31:00.108 "num_base_bdevs_operational": 1, 00:31:00.108 "base_bdevs_list": [ 00:31:00.108 { 00:31:00.108 "name": null, 00:31:00.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.108 "is_configured": false, 00:31:00.108 "data_offset": 2048, 00:31:00.108 "data_size": 63488 00:31:00.108 }, 00:31:00.108 { 00:31:00.108 "name": "BaseBdev2", 00:31:00.108 "uuid": "ece3151b-e131-42ca-ad0c-f4ecbc7bc94b", 00:31:00.108 "is_configured": true, 00:31:00.108 "data_offset": 2048, 00:31:00.108 "data_size": 63488 00:31:00.108 } 00:31:00.108 ] 00:31:00.108 }' 00:31:00.108 19:24:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:00.108 19:24:15 -- common/autotest_common.sh@10 -- # set +x 00:31:01.043 19:24:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:31:01.043 19:24:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:31:01.043 19:24:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.043 19:24:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:31:01.321 19:24:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:31:01.321 19:24:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:01.321 19:24:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:01.579 [2024-04-18 19:24:17.289122] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:01.579 [2024-04-18 19:24:17.289406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:31:01.579 19:24:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:31:01.579 19:24:17 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:31:01.579 19:24:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.579 19:24:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:31:01.836 19:24:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:31:01.836 19:24:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:31:01.836 19:24:17 -- bdev/bdev_raid.sh@287 -- # killprocess 122253 00:31:01.836 19:24:17 -- common/autotest_common.sh@936 -- # '[' -z 122253 ']' 00:31:01.837 19:24:17 -- common/autotest_common.sh@940 -- # kill -0 122253 00:31:01.837 19:24:17 -- common/autotest_common.sh@941 -- # uname 00:31:01.837 19:24:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:01.837 19:24:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122253 00:31:01.837 killing process with pid 122253 00:31:01.837 19:24:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:01.837 19:24:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:01.837 19:24:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122253' 00:31:01.837 19:24:17 -- common/autotest_common.sh@955 -- # kill 122253 00:31:01.837 19:24:17 -- common/autotest_common.sh@960 -- # wait 122253 00:31:01.837 [2024-04-18 19:24:17.705723] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:01.837 [2024-04-18 19:24:17.705848] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:03.212 ************************************ 00:31:03.212 END TEST raid_state_function_test_sb 00:31:03.212 ************************************ 00:31:03.212 19:24:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:31:03.212 00:31:03.212 real 0m13.443s 00:31:03.212 user 0m23.214s 00:31:03.212 sys 0m1.699s 00:31:03.212 19:24:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:03.212 19:24:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:31:03.471 19:24:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:31:03.471 19:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:03.471 19:24:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.471 ************************************ 00:31:03.471 START TEST raid_superblock_test 00:31:03.471 ************************************ 00:31:03.471 19:24:19 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 2 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:31:03.471 19:24:19 -- 
bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=122634 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:03.471 19:24:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122634 /var/tmp/spdk-raid.sock 00:31:03.471 19:24:19 -- common/autotest_common.sh@817 -- # '[' -z 122634 ']' 00:31:03.471 19:24:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:03.471 19:24:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:03.471 19:24:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:03.471 19:24:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:03.471 19:24:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.471 [2024-04-18 19:24:19.300891] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:31:03.471 [2024-04-18 19:24:19.301361] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122634 ] 00:31:03.729 [2024-04-18 19:24:19.480505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.988 [2024-04-18 19:24:19.750089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.247 [2024-04-18 19:24:19.994005] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:04.506 19:24:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:04.506 19:24:20 -- common/autotest_common.sh@850 -- # return 0 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:04.506 19:24:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:31:04.764 malloc1 00:31:04.764 19:24:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:05.022 [2024-04-18 19:24:20.838057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:05.022 [2024-04-18 19:24:20.838349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.022 [2024-04-18 19:24:20.838418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:05.022 [2024-04-18 19:24:20.838706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:31:05.022 [2024-04-18 19:24:20.841356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.022 [2024-04-18 19:24:20.841578] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:05.022 pt1 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:05.022 19:24:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:05.629 malloc2 00:31:05.629 19:24:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:05.629 [2024-04-18 19:24:21.526316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:05.629 [2024-04-18 19:24:21.526626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.629 [2024-04-18 19:24:21.526727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:05.629 [2024-04-18 19:24:21.526893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.629 [2024-04-18 19:24:21.529521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.629 [2024-04-18 19:24:21.529709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:05.629 pt2 00:31:05.629 19:24:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:31:05.629 19:24:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:31:05.629 19:24:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:31:05.887 [2024-04-18 19:24:21.814537] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:06.145 [2024-04-18 19:24:21.816871] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:06.145 [2024-04-18 19:24:21.817216] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:31:06.145 [2024-04-18 19:24:21.817347] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:06.145 [2024-04-18 19:24:21.817530] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:31:06.145 [2024-04-18 19:24:21.817934] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:31:06.145 [2024-04-18 19:24:21.817978] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:31:06.145 [2024-04-18 19:24:21.818264] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.145 19:24:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.403 19:24:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:06.403 "name": "raid_bdev1", 00:31:06.403 "uuid": "95c13d4d-ce3c-4ff9-a912-299ce40290e6", 00:31:06.403 "strip_size_kb": 64, 00:31:06.403 "state": "online", 00:31:06.403 "raid_level": "concat", 00:31:06.403 "superblock": true, 00:31:06.403 "num_base_bdevs": 2, 00:31:06.403 "num_base_bdevs_discovered": 2, 00:31:06.403 "num_base_bdevs_operational": 2, 00:31:06.403 "base_bdevs_list": [ 00:31:06.403 { 00:31:06.403 "name": "pt1", 00:31:06.403 "uuid": "0a9d3e3e-787b-5d7a-b5ed-629a6543d3ea", 00:31:06.403 "is_configured": true, 00:31:06.403 "data_offset": 2048, 00:31:06.403 "data_size": 63488 00:31:06.403 }, 00:31:06.403 { 00:31:06.403 "name": "pt2", 00:31:06.403 "uuid": "0346dd70-f04f-5d75-a1b7-60e126fdb65c", 00:31:06.403 "is_configured": true, 00:31:06.403 "data_offset": 2048, 00:31:06.403 "data_size": 63488 00:31:06.403 } 00:31:06.403 ] 00:31:06.403 }' 00:31:06.403 19:24:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:06.403 19:24:22 -- common/autotest_common.sh@10 -- # set +x 00:31:06.969 19:24:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:06.969 19:24:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:31:07.228 [2024-04-18 19:24:23.095142] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:07.228 19:24:23 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=95c13d4d-ce3c-4ff9-a912-299ce40290e6 00:31:07.228 19:24:23 -- bdev/bdev_raid.sh@380 -- # '[' -z 95c13d4d-ce3c-4ff9-a912-299ce40290e6 ']' 00:31:07.228 19:24:23 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:07.496 [2024-04-18 19:24:23.382925] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:07.496 [2024-04-18 19:24:23.383157] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:07.496 [2024-04-18 19:24:23.383350] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:07.496 [2024-04-18 19:24:23.383520] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:07.496 [2024-04-18 19:24:23.383610] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:31:07.496 19:24:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:31:07.496 19:24:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.065 19:24:23 -- 
bdev/bdev_raid.sh@386 -- # raid_bdev= 00:31:08.065 19:24:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:31:08.065 19:24:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:31:08.065 19:24:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:08.323 19:24:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:31:08.323 19:24:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:08.581 19:24:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:08.581 19:24:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:08.839 19:24:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:31:08.839 19:24:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:31:08.839 19:24:24 -- common/autotest_common.sh@638 -- # local es=0 00:31:08.840 19:24:24 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:31:08.840 19:24:24 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.840 19:24:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.840 19:24:24 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.840 19:24:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.840 19:24:24 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.840 19:24:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.840 19:24:24 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.840 19:24:24 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:08.840 19:24:24 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:31:09.098 [2024-04-18 19:24:24.875222] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:09.098 [2024-04-18 19:24:24.877668] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:09.098 [2024-04-18 19:24:24.877902] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:31:09.098 [2024-04-18 19:24:24.878098] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:31:09.098 [2024-04-18 19:24:24.878167] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:09.098 [2024-04-18 19:24:24.878327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:31:09.098 request: 00:31:09.098 { 00:31:09.098 "name": "raid_bdev1", 00:31:09.098 "raid_level": "concat", 00:31:09.098 "base_bdevs": [ 00:31:09.098 "malloc1", 00:31:09.098 "malloc2" 00:31:09.098 ], 00:31:09.098 "superblock": false, 00:31:09.098 "strip_size_kb": 64, 00:31:09.098 "method": "bdev_raid_create", 00:31:09.098 
"req_id": 1 00:31:09.098 } 00:31:09.098 Got JSON-RPC error response 00:31:09.098 response: 00:31:09.098 { 00:31:09.098 "code": -17, 00:31:09.098 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:09.098 } 00:31:09.098 19:24:24 -- common/autotest_common.sh@641 -- # es=1 00:31:09.098 19:24:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:09.098 19:24:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:09.098 19:24:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:09.098 19:24:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:31:09.098 19:24:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.357 19:24:25 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:31:09.357 19:24:25 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:31:09.357 19:24:25 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:09.614 [2024-04-18 19:24:25.451307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:09.614 [2024-04-18 19:24:25.451639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:09.614 [2024-04-18 19:24:25.451794] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:09.614 [2024-04-18 19:24:25.451908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:09.614 [2024-04-18 19:24:25.454579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:09.614 [2024-04-18 19:24:25.454777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:09.614 [2024-04-18 19:24:25.454991] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:31:09.614 [2024-04-18 19:24:25.455163] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:09.614 pt1 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.614 19:24:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.872 19:24:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:09.872 "name": "raid_bdev1", 00:31:09.872 "uuid": "95c13d4d-ce3c-4ff9-a912-299ce40290e6", 00:31:09.872 "strip_size_kb": 64, 00:31:09.872 "state": "configuring", 00:31:09.872 "raid_level": "concat", 00:31:09.872 "superblock": true, 00:31:09.872 "num_base_bdevs": 2, 00:31:09.872 "num_base_bdevs_discovered": 1, 00:31:09.872 "num_base_bdevs_operational": 2, 00:31:09.872 
"base_bdevs_list": [ 00:31:09.872 { 00:31:09.872 "name": "pt1", 00:31:09.872 "uuid": "0a9d3e3e-787b-5d7a-b5ed-629a6543d3ea", 00:31:09.872 "is_configured": true, 00:31:09.872 "data_offset": 2048, 00:31:09.872 "data_size": 63488 00:31:09.872 }, 00:31:09.872 { 00:31:09.872 "name": null, 00:31:09.872 "uuid": "0346dd70-f04f-5d75-a1b7-60e126fdb65c", 00:31:09.872 "is_configured": false, 00:31:09.872 "data_offset": 2048, 00:31:09.872 "data_size": 63488 00:31:09.872 } 00:31:09.872 ] 00:31:09.872 }' 00:31:09.872 19:24:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:09.872 19:24:25 -- common/autotest_common.sh@10 -- # set +x 00:31:10.804 19:24:26 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:31:10.804 19:24:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:31:10.804 19:24:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:31:10.804 19:24:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:11.062 [2024-04-18 19:24:26.779860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:11.062 [2024-04-18 19:24:26.780236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.062 [2024-04-18 19:24:26.780321] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:11.062 [2024-04-18 19:24:26.780596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.062 [2024-04-18 19:24:26.781116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.062 [2024-04-18 19:24:26.781289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:11.062 [2024-04-18 19:24:26.781545] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:31:11.062 [2024-04-18 19:24:26.781694] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:11.062 [2024-04-18 19:24:26.781954] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:31:11.062 [2024-04-18 19:24:26.782074] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:11.062 [2024-04-18 19:24:26.782268] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:11.062 [2024-04-18 19:24:26.782648] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:31:11.062 [2024-04-18 19:24:26.782693] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:31:11.062 [2024-04-18 19:24:26.782939] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.062 pt2 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:11.062 19:24:26 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.062 19:24:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.319 19:24:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:11.319 "name": "raid_bdev1", 00:31:11.319 "uuid": "95c13d4d-ce3c-4ff9-a912-299ce40290e6", 00:31:11.319 "strip_size_kb": 64, 00:31:11.319 "state": "online", 00:31:11.319 "raid_level": "concat", 00:31:11.319 "superblock": true, 00:31:11.319 "num_base_bdevs": 2, 00:31:11.319 "num_base_bdevs_discovered": 2, 00:31:11.319 "num_base_bdevs_operational": 2, 00:31:11.319 "base_bdevs_list": [ 00:31:11.319 { 00:31:11.319 "name": "pt1", 00:31:11.319 "uuid": "0a9d3e3e-787b-5d7a-b5ed-629a6543d3ea", 00:31:11.319 "is_configured": true, 00:31:11.319 "data_offset": 2048, 00:31:11.319 "data_size": 63488 00:31:11.319 }, 00:31:11.319 { 00:31:11.319 "name": "pt2", 00:31:11.319 "uuid": "0346dd70-f04f-5d75-a1b7-60e126fdb65c", 00:31:11.319 "is_configured": true, 00:31:11.319 "data_offset": 2048, 00:31:11.319 "data_size": 63488 00:31:11.319 } 00:31:11.319 ] 00:31:11.319 }' 00:31:11.319 19:24:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:11.319 19:24:27 -- common/autotest_common.sh@10 -- # set +x 00:31:12.253 19:24:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:12.253 19:24:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:31:12.253 [2024-04-18 19:24:28.112423] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:12.253 19:24:28 -- bdev/bdev_raid.sh@430 -- # '[' 95c13d4d-ce3c-4ff9-a912-299ce40290e6 '!=' 95c13d4d-ce3c-4ff9-a912-299ce40290e6 ']' 00:31:12.253 19:24:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:31:12.253 19:24:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:31:12.253 19:24:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:31:12.253 19:24:28 -- bdev/bdev_raid.sh@511 -- # killprocess 122634 00:31:12.253 19:24:28 -- common/autotest_common.sh@936 -- # '[' -z 122634 ']' 00:31:12.253 19:24:28 -- common/autotest_common.sh@940 -- # kill -0 122634 00:31:12.253 19:24:28 -- common/autotest_common.sh@941 -- # uname 00:31:12.253 19:24:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:12.253 19:24:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122634 00:31:12.253 killing process with pid 122634 00:31:12.253 19:24:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:12.253 19:24:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:12.253 19:24:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122634' 00:31:12.253 19:24:28 -- common/autotest_common.sh@955 -- # kill 122634 00:31:12.253 19:24:28 -- common/autotest_common.sh@960 -- # wait 122634 00:31:12.253 [2024-04-18 19:24:28.155378] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:12.253 [2024-04-18 19:24:28.155477] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:12.253 [2024-04-18 19:24:28.155526] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:12.253 [2024-04-18 19:24:28.155536] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:31:12.512 [2024-04-18 19:24:28.374454] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:13.889 ************************************ 00:31:13.889 END TEST raid_superblock_test 00:31:13.889 ************************************ 00:31:13.889 19:24:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:31:13.889 00:31:13.889 real 0m10.582s 00:31:13.889 user 0m17.899s 00:31:13.889 sys 0m1.329s 00:31:13.889 19:24:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:13.889 19:24:29 -- common/autotest_common.sh@10 -- # set +x 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:31:14.148 19:24:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:14.148 19:24:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:14.148 19:24:29 -- common/autotest_common.sh@10 -- # set +x 00:31:14.148 ************************************ 00:31:14.148 START TEST raid_state_function_test 00:31:14.148 ************************************ 00:31:14.148 19:24:29 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 false 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=122922 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122922' 00:31:14.148 Process raid pid: 122922 00:31:14.148 19:24:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122922 /var/tmp/spdk-raid.sock 00:31:14.148 19:24:29 -- common/autotest_common.sh@817 -- # '[' -z 122922 ']' 00:31:14.148 19:24:29 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:31:14.149 19:24:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:14.149 19:24:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:14.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:14.149 19:24:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:14.149 19:24:29 -- common/autotest_common.sh@10 -- # set +x 00:31:14.149 [2024-04-18 19:24:29.970340] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:31:14.149 [2024-04-18 19:24:29.970678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.407 [2024-04-18 19:24:30.136765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.665 [2024-04-18 19:24:30.401411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.923 [2024-04-18 19:24:30.642898] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:15.181 19:24:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:15.181 19:24:30 -- common/autotest_common.sh@850 -- # return 0 00:31:15.181 19:24:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:15.439 [2024-04-18 19:24:31.186996] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:15.439 [2024-04-18 19:24:31.187250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:15.439 [2024-04-18 19:24:31.187357] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:15.439 [2024-04-18 19:24:31.187437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.439 19:24:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.697 19:24:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:15.697 "name": "Existed_Raid", 00:31:15.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.697 "strip_size_kb": 0, 00:31:15.697 "state": "configuring", 00:31:15.697 "raid_level": "raid1", 00:31:15.697 "superblock": false, 00:31:15.697 "num_base_bdevs": 2, 00:31:15.697 "num_base_bdevs_discovered": 0, 00:31:15.697 
"num_base_bdevs_operational": 2, 00:31:15.697 "base_bdevs_list": [ 00:31:15.697 { 00:31:15.697 "name": "BaseBdev1", 00:31:15.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.697 "is_configured": false, 00:31:15.697 "data_offset": 0, 00:31:15.697 "data_size": 0 00:31:15.697 }, 00:31:15.697 { 00:31:15.697 "name": "BaseBdev2", 00:31:15.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.698 "is_configured": false, 00:31:15.698 "data_offset": 0, 00:31:15.698 "data_size": 0 00:31:15.698 } 00:31:15.698 ] 00:31:15.698 }' 00:31:15.698 19:24:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:15.698 19:24:31 -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 19:24:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:16.632 [2024-04-18 19:24:32.483158] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:16.632 [2024-04-18 19:24:32.483372] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:31:16.632 19:24:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:16.890 [2024-04-18 19:24:32.687224] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:16.890 [2024-04-18 19:24:32.687530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:16.890 [2024-04-18 19:24:32.687621] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:16.890 [2024-04-18 19:24:32.687676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:16.890 19:24:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:17.148 [2024-04-18 19:24:33.010019] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:17.148 BaseBdev1 00:31:17.148 19:24:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:31:17.148 19:24:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:31:17.148 19:24:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:17.148 19:24:33 -- common/autotest_common.sh@887 -- # local i 00:31:17.148 19:24:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:17.148 19:24:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:17.148 19:24:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:17.406 19:24:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:17.666 [ 00:31:17.666 { 00:31:17.666 "name": "BaseBdev1", 00:31:17.666 "aliases": [ 00:31:17.666 "cfce0af8-ccc1-4f19-910d-c4699610bac0" 00:31:17.666 ], 00:31:17.666 "product_name": "Malloc disk", 00:31:17.666 "block_size": 512, 00:31:17.666 "num_blocks": 65536, 00:31:17.666 "uuid": "cfce0af8-ccc1-4f19-910d-c4699610bac0", 00:31:17.666 "assigned_rate_limits": { 00:31:17.666 "rw_ios_per_sec": 0, 00:31:17.666 "rw_mbytes_per_sec": 0, 00:31:17.666 "r_mbytes_per_sec": 0, 00:31:17.666 "w_mbytes_per_sec": 0 00:31:17.666 }, 00:31:17.666 "claimed": true, 00:31:17.666 "claim_type": "exclusive_write", 00:31:17.666 "zoned": false, 00:31:17.666 
"supported_io_types": { 00:31:17.666 "read": true, 00:31:17.666 "write": true, 00:31:17.666 "unmap": true, 00:31:17.666 "write_zeroes": true, 00:31:17.666 "flush": true, 00:31:17.666 "reset": true, 00:31:17.666 "compare": false, 00:31:17.666 "compare_and_write": false, 00:31:17.666 "abort": true, 00:31:17.666 "nvme_admin": false, 00:31:17.666 "nvme_io": false 00:31:17.666 }, 00:31:17.666 "memory_domains": [ 00:31:17.666 { 00:31:17.666 "dma_device_id": "system", 00:31:17.666 "dma_device_type": 1 00:31:17.666 }, 00:31:17.666 { 00:31:17.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.666 "dma_device_type": 2 00:31:17.666 } 00:31:17.666 ], 00:31:17.666 "driver_specific": {} 00:31:17.666 } 00:31:17.666 ] 00:31:17.666 19:24:33 -- common/autotest_common.sh@893 -- # return 0 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.666 19:24:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:17.926 19:24:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:17.926 "name": "Existed_Raid", 00:31:17.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.926 "strip_size_kb": 0, 00:31:17.926 "state": "configuring", 00:31:17.926 "raid_level": "raid1", 00:31:17.926 "superblock": false, 00:31:17.926 "num_base_bdevs": 2, 00:31:17.926 "num_base_bdevs_discovered": 1, 00:31:17.926 "num_base_bdevs_operational": 2, 00:31:17.926 "base_bdevs_list": [ 00:31:17.926 { 00:31:17.926 "name": "BaseBdev1", 00:31:17.926 "uuid": "cfce0af8-ccc1-4f19-910d-c4699610bac0", 00:31:17.926 "is_configured": true, 00:31:17.926 "data_offset": 0, 00:31:17.926 "data_size": 65536 00:31:17.926 }, 00:31:17.926 { 00:31:17.926 "name": "BaseBdev2", 00:31:17.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.926 "is_configured": false, 00:31:17.926 "data_offset": 0, 00:31:17.926 "data_size": 0 00:31:17.926 } 00:31:17.926 ] 00:31:17.926 }' 00:31:17.926 19:24:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:17.926 19:24:33 -- common/autotest_common.sh@10 -- # set +x 00:31:18.861 19:24:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:18.861 [2024-04-18 19:24:34.678473] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:18.861 [2024-04-18 19:24:34.678703] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:18.861 19:24:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:31:18.861 19:24:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create 
-r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:19.119 [2024-04-18 19:24:34.982592] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:19.119 [2024-04-18 19:24:34.985033] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:19.119 [2024-04-18 19:24:34.985227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.119 19:24:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:19.378 19:24:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:19.378 "name": "Existed_Raid", 00:31:19.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.378 "strip_size_kb": 0, 00:31:19.378 "state": "configuring", 00:31:19.378 "raid_level": "raid1", 00:31:19.378 "superblock": false, 00:31:19.378 "num_base_bdevs": 2, 00:31:19.378 "num_base_bdevs_discovered": 1, 00:31:19.378 "num_base_bdevs_operational": 2, 00:31:19.378 "base_bdevs_list": [ 00:31:19.378 { 00:31:19.378 "name": "BaseBdev1", 00:31:19.378 "uuid": "cfce0af8-ccc1-4f19-910d-c4699610bac0", 00:31:19.378 "is_configured": true, 00:31:19.378 "data_offset": 0, 00:31:19.378 "data_size": 65536 00:31:19.378 }, 00:31:19.378 { 00:31:19.378 "name": "BaseBdev2", 00:31:19.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.378 "is_configured": false, 00:31:19.378 "data_offset": 0, 00:31:19.378 "data_size": 0 00:31:19.378 } 00:31:19.378 ] 00:31:19.378 }' 00:31:19.378 19:24:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:19.378 19:24:35 -- common/autotest_common.sh@10 -- # set +x 00:31:20.315 19:24:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:20.574 [2024-04-18 19:24:36.270404] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:20.574 [2024-04-18 19:24:36.270645] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:31:20.574 [2024-04-18 19:24:36.270682] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:20.574 [2024-04-18 19:24:36.270872] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:31:20.574 [2024-04-18 19:24:36.271247] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:31:20.574 [2024-04-18 19:24:36.271347] 
bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:31:20.574 [2024-04-18 19:24:36.271689] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:20.574 BaseBdev2 00:31:20.574 19:24:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:31:20.574 19:24:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:31:20.574 19:24:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:20.574 19:24:36 -- common/autotest_common.sh@887 -- # local i 00:31:20.574 19:24:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:20.574 19:24:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:20.574 19:24:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:20.574 19:24:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:20.833 [ 00:31:20.833 { 00:31:20.833 "name": "BaseBdev2", 00:31:20.833 "aliases": [ 00:31:20.833 "e0f69054-d578-4751-b055-07f16d16fb5f" 00:31:20.833 ], 00:31:20.833 "product_name": "Malloc disk", 00:31:20.833 "block_size": 512, 00:31:20.833 "num_blocks": 65536, 00:31:20.833 "uuid": "e0f69054-d578-4751-b055-07f16d16fb5f", 00:31:20.833 "assigned_rate_limits": { 00:31:20.833 "rw_ios_per_sec": 0, 00:31:20.833 "rw_mbytes_per_sec": 0, 00:31:20.833 "r_mbytes_per_sec": 0, 00:31:20.833 "w_mbytes_per_sec": 0 00:31:20.833 }, 00:31:20.833 "claimed": true, 00:31:20.833 "claim_type": "exclusive_write", 00:31:20.833 "zoned": false, 00:31:20.833 "supported_io_types": { 00:31:20.833 "read": true, 00:31:20.833 "write": true, 00:31:20.833 "unmap": true, 00:31:20.833 "write_zeroes": true, 00:31:20.833 "flush": true, 00:31:20.833 "reset": true, 00:31:20.833 "compare": false, 00:31:20.833 "compare_and_write": false, 00:31:20.833 "abort": true, 00:31:20.833 "nvme_admin": false, 00:31:20.833 "nvme_io": false 00:31:20.833 }, 00:31:20.833 "memory_domains": [ 00:31:20.833 { 00:31:20.833 "dma_device_id": "system", 00:31:20.833 "dma_device_type": 1 00:31:20.833 }, 00:31:20.833 { 00:31:20.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.833 "dma_device_type": 2 00:31:20.833 } 00:31:20.833 ], 00:31:20.833 "driver_specific": {} 00:31:20.833 } 00:31:20.833 ] 00:31:20.833 19:24:36 -- common/autotest_common.sh@893 -- # return 0 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
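The RPC sequence this raid1 state test drives can be replayed by hand against a running SPDK target. The sketch below is a minimal illustration, assuming the same rpc.py location and /var/tmp/spdk-raid.sock socket that appear in the log above; it is not part of the recorded test output.

    #!/usr/bin/env bash
    # Minimal sketch of the raid1 flow exercised above. Assumes an SPDK app
    # (e.g. bdev_svc) is already listening on /var/tmp/spdk-raid.sock.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Create the two 32 MiB / 512-byte-block malloc base bdevs used by the test.
    rpc bdev_malloc_create 32 512 -b BaseBdev1
    rpc bdev_malloc_create 32 512 -b BaseBdev2

    # Assemble them into a raid1 bdev and wait for examine to finish.
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    rpc bdev_wait_for_examine

    # Check the array state the same way verify_raid_bdev_state does.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect "online"

    # raid1 has redundancy, so the array is expected to stay online after losing one base bdev.
    rpc bdev_malloc_delete BaseBdev1
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # still "online"

    # Clean up.
    rpc bdev_raid_delete Existed_Raid
    rpc bdev_malloc_delete BaseBdev2

The expected transitions mirror the surrounding log records: the array reports "online" with two discovered base bdevs, and remains online with a single operational member after BaseBdev1 is removed, since has_redundancy treats raid1 as redundant.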
00:31:20.833 19:24:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:21.399 19:24:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:21.399 "name": "Existed_Raid", 00:31:21.399 "uuid": "cbcbb01a-c01c-44a9-9fb5-05d66b956f4f", 00:31:21.399 "strip_size_kb": 0, 00:31:21.399 "state": "online", 00:31:21.399 "raid_level": "raid1", 00:31:21.399 "superblock": false, 00:31:21.399 "num_base_bdevs": 2, 00:31:21.399 "num_base_bdevs_discovered": 2, 00:31:21.399 "num_base_bdevs_operational": 2, 00:31:21.399 "base_bdevs_list": [ 00:31:21.399 { 00:31:21.399 "name": "BaseBdev1", 00:31:21.399 "uuid": "cfce0af8-ccc1-4f19-910d-c4699610bac0", 00:31:21.399 "is_configured": true, 00:31:21.399 "data_offset": 0, 00:31:21.399 "data_size": 65536 00:31:21.399 }, 00:31:21.399 { 00:31:21.399 "name": "BaseBdev2", 00:31:21.399 "uuid": "e0f69054-d578-4751-b055-07f16d16fb5f", 00:31:21.399 "is_configured": true, 00:31:21.399 "data_offset": 0, 00:31:21.399 "data_size": 65536 00:31:21.399 } 00:31:21.399 ] 00:31:21.399 }' 00:31:21.399 19:24:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:21.399 19:24:37 -- common/autotest_common.sh@10 -- # set +x 00:31:22.011 19:24:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:22.011 [2024-04-18 19:24:37.906918] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.269 19:24:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:22.528 19:24:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:22.528 "name": "Existed_Raid", 00:31:22.528 "uuid": "cbcbb01a-c01c-44a9-9fb5-05d66b956f4f", 00:31:22.528 "strip_size_kb": 0, 00:31:22.528 "state": "online", 00:31:22.528 "raid_level": "raid1", 00:31:22.528 "superblock": false, 00:31:22.528 "num_base_bdevs": 2, 00:31:22.528 "num_base_bdevs_discovered": 1, 00:31:22.528 "num_base_bdevs_operational": 1, 00:31:22.528 "base_bdevs_list": [ 00:31:22.528 { 00:31:22.528 "name": null, 00:31:22.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.528 "is_configured": false, 00:31:22.528 "data_offset": 0, 00:31:22.528 "data_size": 65536 00:31:22.528 }, 00:31:22.528 { 00:31:22.528 "name": "BaseBdev2", 
00:31:22.528 "uuid": "e0f69054-d578-4751-b055-07f16d16fb5f", 00:31:22.528 "is_configured": true, 00:31:22.528 "data_offset": 0, 00:31:22.528 "data_size": 65536 00:31:22.528 } 00:31:22.528 ] 00:31:22.528 }' 00:31:22.528 19:24:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:22.528 19:24:38 -- common/autotest_common.sh@10 -- # set +x 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:23.462 19:24:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:23.720 [2024-04-18 19:24:39.555827] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:23.720 [2024-04-18 19:24:39.556157] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:23.978 [2024-04-18 19:24:39.666299] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:23.978 [2024-04-18 19:24:39.666620] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:23.978 [2024-04-18 19:24:39.666745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:31:23.978 19:24:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:31:23.978 19:24:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:31:23.978 19:24:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.978 19:24:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:31:24.237 19:24:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:31:24.237 19:24:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:31:24.237 19:24:39 -- bdev/bdev_raid.sh@287 -- # killprocess 122922 00:31:24.237 19:24:39 -- common/autotest_common.sh@936 -- # '[' -z 122922 ']' 00:31:24.237 19:24:39 -- common/autotest_common.sh@940 -- # kill -0 122922 00:31:24.237 19:24:39 -- common/autotest_common.sh@941 -- # uname 00:31:24.237 19:24:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:24.237 19:24:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122922 00:31:24.237 killing process with pid 122922 00:31:24.237 19:24:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:24.237 19:24:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:24.237 19:24:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122922' 00:31:24.237 19:24:39 -- common/autotest_common.sh@955 -- # kill 122922 00:31:24.237 19:24:39 -- common/autotest_common.sh@960 -- # wait 122922 00:31:24.237 [2024-04-18 19:24:39.934333] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:24.237 [2024-04-18 19:24:39.934475] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:25.613 ************************************ 00:31:25.613 END TEST raid_state_function_test 00:31:25.613 ************************************ 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:31:25.613 00:31:25.613 real 
0m11.460s 00:31:25.613 user 0m19.566s 00:31:25.613 sys 0m1.544s 00:31:25.613 19:24:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.613 19:24:41 -- common/autotest_common.sh@10 -- # set +x 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:31:25.613 19:24:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:25.613 19:24:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.613 19:24:41 -- common/autotest_common.sh@10 -- # set +x 00:31:25.613 ************************************ 00:31:25.613 START TEST raid_state_function_test_sb 00:31:25.613 ************************************ 00:31:25.613 19:24:41 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 true 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=123285 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123285' 00:31:25.613 Process raid pid: 123285 00:31:25.613 19:24:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123285 /var/tmp/spdk-raid.sock 00:31:25.613 19:24:41 -- common/autotest_common.sh@817 -- # '[' -z 123285 ']' 00:31:25.613 19:24:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:25.613 19:24:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:25.613 19:24:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:25.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
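Every verify_raid_bdev_state check in this trace follows the same pattern: dump all raid bdevs over the test RPC socket and pick out the bdev under test with jq. A minimal sketch of that pattern, using only the rpc.py script, socket path and bdev name that already appear in the trace (any other values would be assumptions):

  # query the raid bdev the same way the traced helper does
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'

The selected object's "state", "num_base_bdevs_discovered" and "base_bdevs_list" fields are what the assertions in the trace compare against the expected values.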
00:31:25.613 19:24:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:25.613 19:24:41 -- common/autotest_common.sh@10 -- # set +x 00:31:25.613 [2024-04-18 19:24:41.536718] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:31:25.613 [2024-04-18 19:24:41.537132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.871 [2024-04-18 19:24:41.720960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.148 [2024-04-18 19:24:41.989563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.406 [2024-04-18 19:24:42.221894] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:26.664 19:24:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:26.664 19:24:42 -- common/autotest_common.sh@850 -- # return 0 00:31:26.664 19:24:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:26.921 [2024-04-18 19:24:42.755959] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:26.921 [2024-04-18 19:24:42.756223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:26.921 [2024-04-18 19:24:42.756313] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:26.921 [2024-04-18 19:24:42.756366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.921 19:24:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.178 19:24:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:27.178 "name": "Existed_Raid", 00:31:27.178 "uuid": "d1fb7ce8-ae50-424a-9141-07f1c6cd3bf7", 00:31:27.178 "strip_size_kb": 0, 00:31:27.178 "state": "configuring", 00:31:27.178 "raid_level": "raid1", 00:31:27.178 "superblock": true, 00:31:27.178 "num_base_bdevs": 2, 00:31:27.178 "num_base_bdevs_discovered": 0, 00:31:27.178 "num_base_bdevs_operational": 2, 00:31:27.178 "base_bdevs_list": [ 00:31:27.178 { 00:31:27.178 "name": "BaseBdev1", 00:31:27.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.178 "is_configured": false, 00:31:27.178 "data_offset": 0, 00:31:27.178 "data_size": 0 00:31:27.178 }, 00:31:27.178 { 00:31:27.178 "name": "BaseBdev2", 00:31:27.178 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:27.178 "is_configured": false, 00:31:27.178 "data_offset": 0, 00:31:27.178 "data_size": 0 00:31:27.178 } 00:31:27.178 ] 00:31:27.178 }' 00:31:27.178 19:24:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:27.178 19:24:43 -- common/autotest_common.sh@10 -- # set +x 00:31:28.108 19:24:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:28.108 [2024-04-18 19:24:44.020061] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:28.108 [2024-04-18 19:24:44.020271] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:31:28.418 19:24:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:28.418 [2024-04-18 19:24:44.252165] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:28.418 [2024-04-18 19:24:44.252414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:28.418 [2024-04-18 19:24:44.252509] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:28.418 [2024-04-18 19:24:44.252567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:28.418 19:24:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:28.677 [2024-04-18 19:24:44.575157] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:28.677 BaseBdev1 00:31:28.677 19:24:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:31:28.677 19:24:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:31:28.677 19:24:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:28.677 19:24:44 -- common/autotest_common.sh@887 -- # local i 00:31:28.677 19:24:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:28.677 19:24:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:28.677 19:24:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:29.242 19:24:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:29.242 [ 00:31:29.242 { 00:31:29.242 "name": "BaseBdev1", 00:31:29.242 "aliases": [ 00:31:29.242 "4e548643-fb43-4fa4-875d-ff43de4d553e" 00:31:29.242 ], 00:31:29.242 "product_name": "Malloc disk", 00:31:29.242 "block_size": 512, 00:31:29.242 "num_blocks": 65536, 00:31:29.242 "uuid": "4e548643-fb43-4fa4-875d-ff43de4d553e", 00:31:29.242 "assigned_rate_limits": { 00:31:29.242 "rw_ios_per_sec": 0, 00:31:29.242 "rw_mbytes_per_sec": 0, 00:31:29.242 "r_mbytes_per_sec": 0, 00:31:29.242 "w_mbytes_per_sec": 0 00:31:29.242 }, 00:31:29.242 "claimed": true, 00:31:29.242 "claim_type": "exclusive_write", 00:31:29.242 "zoned": false, 00:31:29.242 "supported_io_types": { 00:31:29.242 "read": true, 00:31:29.242 "write": true, 00:31:29.242 "unmap": true, 00:31:29.242 "write_zeroes": true, 00:31:29.242 "flush": true, 00:31:29.242 "reset": true, 00:31:29.242 "compare": false, 00:31:29.242 "compare_and_write": false, 00:31:29.242 "abort": true, 00:31:29.242 "nvme_admin": false, 00:31:29.242 "nvme_io": false 
00:31:29.242 }, 00:31:29.242 "memory_domains": [ 00:31:29.242 { 00:31:29.242 "dma_device_id": "system", 00:31:29.242 "dma_device_type": 1 00:31:29.242 }, 00:31:29.243 { 00:31:29.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.243 "dma_device_type": 2 00:31:29.243 } 00:31:29.243 ], 00:31:29.243 "driver_specific": {} 00:31:29.243 } 00:31:29.243 ] 00:31:29.243 19:24:45 -- common/autotest_common.sh@893 -- # return 0 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.243 19:24:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.806 19:24:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:29.806 "name": "Existed_Raid", 00:31:29.806 "uuid": "ed7e59e5-0180-4362-87a9-29744655b269", 00:31:29.806 "strip_size_kb": 0, 00:31:29.806 "state": "configuring", 00:31:29.806 "raid_level": "raid1", 00:31:29.806 "superblock": true, 00:31:29.806 "num_base_bdevs": 2, 00:31:29.806 "num_base_bdevs_discovered": 1, 00:31:29.806 "num_base_bdevs_operational": 2, 00:31:29.806 "base_bdevs_list": [ 00:31:29.806 { 00:31:29.806 "name": "BaseBdev1", 00:31:29.806 "uuid": "4e548643-fb43-4fa4-875d-ff43de4d553e", 00:31:29.806 "is_configured": true, 00:31:29.806 "data_offset": 2048, 00:31:29.806 "data_size": 63488 00:31:29.806 }, 00:31:29.806 { 00:31:29.806 "name": "BaseBdev2", 00:31:29.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.806 "is_configured": false, 00:31:29.806 "data_offset": 0, 00:31:29.806 "data_size": 0 00:31:29.806 } 00:31:29.806 ] 00:31:29.806 }' 00:31:29.806 19:24:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:29.806 19:24:45 -- common/autotest_common.sh@10 -- # set +x 00:31:30.372 19:24:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:30.631 [2024-04-18 19:24:46.459697] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:30.631 [2024-04-18 19:24:46.459935] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:30.631 19:24:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:31:30.631 19:24:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:31.226 19:24:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:31.485 BaseBdev1 00:31:31.485 19:24:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:31:31.485 19:24:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 
00:31:31.485 19:24:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:31.485 19:24:47 -- common/autotest_common.sh@887 -- # local i 00:31:31.485 19:24:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:31.485 19:24:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:31.485 19:24:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:31.743 19:24:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:32.001 [ 00:31:32.001 { 00:31:32.001 "name": "BaseBdev1", 00:31:32.001 "aliases": [ 00:31:32.001 "9b9e5448-7c7e-401b-b1f8-862f31898d35" 00:31:32.001 ], 00:31:32.001 "product_name": "Malloc disk", 00:31:32.001 "block_size": 512, 00:31:32.001 "num_blocks": 65536, 00:31:32.001 "uuid": "9b9e5448-7c7e-401b-b1f8-862f31898d35", 00:31:32.001 "assigned_rate_limits": { 00:31:32.001 "rw_ios_per_sec": 0, 00:31:32.001 "rw_mbytes_per_sec": 0, 00:31:32.001 "r_mbytes_per_sec": 0, 00:31:32.001 "w_mbytes_per_sec": 0 00:31:32.001 }, 00:31:32.001 "claimed": false, 00:31:32.001 "zoned": false, 00:31:32.001 "supported_io_types": { 00:31:32.001 "read": true, 00:31:32.001 "write": true, 00:31:32.001 "unmap": true, 00:31:32.001 "write_zeroes": true, 00:31:32.001 "flush": true, 00:31:32.001 "reset": true, 00:31:32.001 "compare": false, 00:31:32.001 "compare_and_write": false, 00:31:32.001 "abort": true, 00:31:32.001 "nvme_admin": false, 00:31:32.001 "nvme_io": false 00:31:32.001 }, 00:31:32.001 "memory_domains": [ 00:31:32.001 { 00:31:32.001 "dma_device_id": "system", 00:31:32.001 "dma_device_type": 1 00:31:32.001 }, 00:31:32.001 { 00:31:32.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.001 "dma_device_type": 2 00:31:32.001 } 00:31:32.001 ], 00:31:32.001 "driver_specific": {} 00:31:32.001 } 00:31:32.001 ] 00:31:32.001 19:24:47 -- common/autotest_common.sh@893 -- # return 0 00:31:32.001 19:24:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:32.260 [2024-04-18 19:24:48.069387] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:32.260 [2024-04-18 19:24:48.071777] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:32.260 [2024-04-18 19:24:48.071953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:32.260 19:24:48 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.260 19:24:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:32.518 19:24:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:32.518 "name": "Existed_Raid", 00:31:32.518 "uuid": "f1be7a7b-47ad-420b-8752-8a707fe55b37", 00:31:32.518 "strip_size_kb": 0, 00:31:32.518 "state": "configuring", 00:31:32.518 "raid_level": "raid1", 00:31:32.518 "superblock": true, 00:31:32.518 "num_base_bdevs": 2, 00:31:32.518 "num_base_bdevs_discovered": 1, 00:31:32.518 "num_base_bdevs_operational": 2, 00:31:32.518 "base_bdevs_list": [ 00:31:32.518 { 00:31:32.518 "name": "BaseBdev1", 00:31:32.518 "uuid": "9b9e5448-7c7e-401b-b1f8-862f31898d35", 00:31:32.518 "is_configured": true, 00:31:32.518 "data_offset": 2048, 00:31:32.518 "data_size": 63488 00:31:32.518 }, 00:31:32.518 { 00:31:32.518 "name": "BaseBdev2", 00:31:32.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.518 "is_configured": false, 00:31:32.518 "data_offset": 0, 00:31:32.518 "data_size": 0 00:31:32.518 } 00:31:32.518 ] 00:31:32.518 }' 00:31:32.518 19:24:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:32.518 19:24:48 -- common/autotest_common.sh@10 -- # set +x 00:31:33.451 19:24:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:33.709 [2024-04-18 19:24:49.438252] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:33.709 [2024-04-18 19:24:49.438705] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:31:33.709 [2024-04-18 19:24:49.438826] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:33.709 [2024-04-18 19:24:49.439008] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:31:33.709 BaseBdev2 00:31:33.709 [2024-04-18 19:24:49.439484] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:31:33.709 [2024-04-18 19:24:49.439596] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:31:33.709 [2024-04-18 19:24:49.439837] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:33.709 19:24:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:31:33.709 19:24:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:31:33.709 19:24:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:33.709 19:24:49 -- common/autotest_common.sh@887 -- # local i 00:31:33.709 19:24:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:33.709 19:24:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:33.709 19:24:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:33.967 19:24:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:34.225 [ 00:31:34.225 { 00:31:34.225 "name": "BaseBdev2", 00:31:34.225 "aliases": [ 00:31:34.225 "fedf25fa-33ab-4314-a23a-d1fdd9f04da2" 00:31:34.225 ], 00:31:34.225 "product_name": "Malloc disk", 00:31:34.225 "block_size": 512, 00:31:34.225 "num_blocks": 65536, 00:31:34.225 "uuid": "fedf25fa-33ab-4314-a23a-d1fdd9f04da2", 00:31:34.225 "assigned_rate_limits": { 00:31:34.225 
"rw_ios_per_sec": 0, 00:31:34.225 "rw_mbytes_per_sec": 0, 00:31:34.225 "r_mbytes_per_sec": 0, 00:31:34.225 "w_mbytes_per_sec": 0 00:31:34.225 }, 00:31:34.225 "claimed": true, 00:31:34.225 "claim_type": "exclusive_write", 00:31:34.225 "zoned": false, 00:31:34.225 "supported_io_types": { 00:31:34.225 "read": true, 00:31:34.225 "write": true, 00:31:34.225 "unmap": true, 00:31:34.225 "write_zeroes": true, 00:31:34.225 "flush": true, 00:31:34.225 "reset": true, 00:31:34.225 "compare": false, 00:31:34.225 "compare_and_write": false, 00:31:34.225 "abort": true, 00:31:34.225 "nvme_admin": false, 00:31:34.225 "nvme_io": false 00:31:34.225 }, 00:31:34.225 "memory_domains": [ 00:31:34.225 { 00:31:34.225 "dma_device_id": "system", 00:31:34.225 "dma_device_type": 1 00:31:34.225 }, 00:31:34.225 { 00:31:34.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:34.225 "dma_device_type": 2 00:31:34.225 } 00:31:34.225 ], 00:31:34.225 "driver_specific": {} 00:31:34.225 } 00:31:34.225 ] 00:31:34.225 19:24:49 -- common/autotest_common.sh@893 -- # return 0 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.225 19:24:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:34.482 19:24:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:34.482 "name": "Existed_Raid", 00:31:34.482 "uuid": "f1be7a7b-47ad-420b-8752-8a707fe55b37", 00:31:34.482 "strip_size_kb": 0, 00:31:34.482 "state": "online", 00:31:34.482 "raid_level": "raid1", 00:31:34.482 "superblock": true, 00:31:34.482 "num_base_bdevs": 2, 00:31:34.482 "num_base_bdevs_discovered": 2, 00:31:34.482 "num_base_bdevs_operational": 2, 00:31:34.482 "base_bdevs_list": [ 00:31:34.482 { 00:31:34.482 "name": "BaseBdev1", 00:31:34.482 "uuid": "9b9e5448-7c7e-401b-b1f8-862f31898d35", 00:31:34.482 "is_configured": true, 00:31:34.482 "data_offset": 2048, 00:31:34.482 "data_size": 63488 00:31:34.482 }, 00:31:34.482 { 00:31:34.482 "name": "BaseBdev2", 00:31:34.483 "uuid": "fedf25fa-33ab-4314-a23a-d1fdd9f04da2", 00:31:34.483 "is_configured": true, 00:31:34.483 "data_offset": 2048, 00:31:34.483 "data_size": 63488 00:31:34.483 } 00:31:34.483 ] 00:31:34.483 }' 00:31:34.483 19:24:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:34.483 19:24:50 -- common/autotest_common.sh@10 -- # set +x 00:31:35.048 19:24:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:35.306 [2024-04-18 19:24:51.170951] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.571 19:24:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:35.847 19:24:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:35.848 "name": "Existed_Raid", 00:31:35.848 "uuid": "f1be7a7b-47ad-420b-8752-8a707fe55b37", 00:31:35.848 "strip_size_kb": 0, 00:31:35.848 "state": "online", 00:31:35.848 "raid_level": "raid1", 00:31:35.848 "superblock": true, 00:31:35.848 "num_base_bdevs": 2, 00:31:35.848 "num_base_bdevs_discovered": 1, 00:31:35.848 "num_base_bdevs_operational": 1, 00:31:35.848 "base_bdevs_list": [ 00:31:35.848 { 00:31:35.848 "name": null, 00:31:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.848 "is_configured": false, 00:31:35.848 "data_offset": 2048, 00:31:35.848 "data_size": 63488 00:31:35.848 }, 00:31:35.848 { 00:31:35.848 "name": "BaseBdev2", 00:31:35.848 "uuid": "fedf25fa-33ab-4314-a23a-d1fdd9f04da2", 00:31:35.848 "is_configured": true, 00:31:35.848 "data_offset": 2048, 00:31:35.848 "data_size": 63488 00:31:35.848 } 00:31:35.848 ] 00:31:35.848 }' 00:31:35.848 19:24:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:35.848 19:24:51 -- common/autotest_common.sh@10 -- # set +x 00:31:36.415 19:24:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:31:36.415 19:24:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:31:36.415 19:24:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.415 19:24:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:31:36.673 19:24:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:31:36.673 19:24:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:36.673 19:24:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:36.931 [2024-04-18 19:24:52.851183] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:36.931 [2024-04-18 19:24:52.851452] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:37.189 [2024-04-18 19:24:52.957494] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:37.189 [2024-04-18 19:24:52.957807] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:37.189 [2024-04-18 19:24:52.957897] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:31:37.189 19:24:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:31:37.189 19:24:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:31:37.189 19:24:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.189 19:24:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:31:37.445 19:24:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:31:37.445 19:24:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:31:37.445 19:24:53 -- bdev/bdev_raid.sh@287 -- # killprocess 123285 00:31:37.445 19:24:53 -- common/autotest_common.sh@936 -- # '[' -z 123285 ']' 00:31:37.445 19:24:53 -- common/autotest_common.sh@940 -- # kill -0 123285 00:31:37.445 19:24:53 -- common/autotest_common.sh@941 -- # uname 00:31:37.445 19:24:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:37.445 19:24:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123285 00:31:37.445 killing process with pid 123285 00:31:37.445 19:24:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:37.445 19:24:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:37.445 19:24:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123285' 00:31:37.445 19:24:53 -- common/autotest_common.sh@955 -- # kill 123285 00:31:37.445 19:24:53 -- common/autotest_common.sh@960 -- # wait 123285 00:31:37.445 [2024-04-18 19:24:53.217176] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:37.445 [2024-04-18 19:24:53.217313] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:38.818 ************************************ 00:31:38.818 END TEST raid_state_function_test_sb 00:31:38.818 ************************************ 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:31:38.818 00:31:38.818 real 0m13.176s 00:31:38.818 user 0m22.750s 00:31:38.818 sys 0m1.628s 00:31:38.818 19:24:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:38.818 19:24:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:31:38.818 19:24:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:31:38.818 19:24:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:38.818 19:24:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.818 ************************************ 00:31:38.818 START TEST raid_superblock_test 00:31:38.818 ************************************ 00:31:38.818 19:24:54 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 2 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=123660 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:38.818 19:24:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123660 /var/tmp/spdk-raid.sock 00:31:38.818 19:24:54 -- common/autotest_common.sh@817 -- # '[' -z 123660 ']' 00:31:38.818 19:24:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:38.818 19:24:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:38.818 19:24:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:38.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:38.818 19:24:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:38.818 19:24:54 -- common/autotest_common.sh@10 -- # set +x 00:31:39.076 [2024-04-18 19:24:54.800554] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:31:39.077 [2024-04-18 19:24:54.800966] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123660 ] 00:31:39.077 [2024-04-18 19:24:54.980503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.334 [2024-04-18 19:24:55.195255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.591 [2024-04-18 19:24:55.414102] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:40.158 19:24:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:40.158 19:24:55 -- common/autotest_common.sh@850 -- # return 0 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:40.158 19:24:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:31:40.416 malloc1 00:31:40.416 19:24:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:40.710 [2024-04-18 19:24:56.505339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:40.710 [2024-04-18 19:24:56.505611] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:40.710 [2024-04-18 19:24:56.505676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:40.710 [2024-04-18 19:24:56.505806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:40.710 [2024-04-18 19:24:56.508330] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:40.710 [2024-04-18 19:24:56.508494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:40.710 pt1 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:40.710 19:24:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:40.979 malloc2 00:31:40.979 19:24:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:41.237 [2024-04-18 19:24:56.998905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:41.237 [2024-04-18 19:24:56.999148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:41.237 [2024-04-18 19:24:56.999327] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:41.237 [2024-04-18 19:24:56.999478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:41.237 [2024-04-18 19:24:57.002180] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:41.237 [2024-04-18 19:24:57.002344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:41.237 pt2 00:31:41.237 19:24:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:31:41.237 19:24:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:31:41.237 19:24:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:41.495 [2024-04-18 19:24:57.243160] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:41.495 [2024-04-18 19:24:57.245446] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:41.495 [2024-04-18 19:24:57.245760] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:31:41.495 [2024-04-18 19:24:57.245884] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:41.495 [2024-04-18 19:24:57.246069] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:31:41.495 [2024-04-18 19:24:57.246562] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:31:41.495 [2024-04-18 19:24:57.246670] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x616000007e80 00:31:41.495 [2024-04-18 19:24:57.246901] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.495 19:24:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.753 19:24:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:41.753 "name": "raid_bdev1", 00:31:41.753 "uuid": "08f8612b-4219-4adb-b591-070862708b8e", 00:31:41.753 "strip_size_kb": 0, 00:31:41.753 "state": "online", 00:31:41.753 "raid_level": "raid1", 00:31:41.753 "superblock": true, 00:31:41.753 "num_base_bdevs": 2, 00:31:41.753 "num_base_bdevs_discovered": 2, 00:31:41.753 "num_base_bdevs_operational": 2, 00:31:41.753 "base_bdevs_list": [ 00:31:41.753 { 00:31:41.753 "name": "pt1", 00:31:41.753 "uuid": "5cd86a98-c512-5a94-93d5-832863507ca2", 00:31:41.753 "is_configured": true, 00:31:41.753 "data_offset": 2048, 00:31:41.753 "data_size": 63488 00:31:41.753 }, 00:31:41.753 { 00:31:41.753 "name": "pt2", 00:31:41.753 "uuid": "61a9f515-40cb-5aa6-92bc-ea3b0f8734e2", 00:31:41.753 "is_configured": true, 00:31:41.753 "data_offset": 2048, 00:31:41.753 "data_size": 63488 00:31:41.753 } 00:31:41.753 ] 00:31:41.753 }' 00:31:41.753 19:24:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:41.753 19:24:57 -- common/autotest_common.sh@10 -- # set +x 00:31:42.378 19:24:58 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:42.378 19:24:58 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:31:42.636 [2024-04-18 19:24:58.447706] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:42.636 19:24:58 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=08f8612b-4219-4adb-b591-070862708b8e 00:31:42.636 19:24:58 -- bdev/bdev_raid.sh@380 -- # '[' -z 08f8612b-4219-4adb-b591-070862708b8e ']' 00:31:42.636 19:24:58 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:42.894 [2024-04-18 19:24:58.679476] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:42.894 [2024-04-18 19:24:58.679668] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:42.894 [2024-04-18 19:24:58.679837] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:42.894 [2024-04-18 19:24:58.679967] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:42.894 [2024-04-18 19:24:58.680044] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000007e80 name raid_bdev1, state offline 00:31:42.894 19:24:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.894 19:24:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:31:43.151 19:24:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:31:43.151 19:24:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:31:43.151 19:24:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:31:43.151 19:24:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:43.409 19:24:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:31:43.409 19:24:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:43.666 19:24:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:43.666 19:24:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:43.923 19:24:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:31:43.923 19:24:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:43.924 19:24:59 -- common/autotest_common.sh@638 -- # local es=0 00:31:43.924 19:24:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:43.924 19:24:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:43.924 19:24:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:43.924 19:24:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:43.924 19:24:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:43.924 19:24:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:43.924 19:24:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:43.924 19:24:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:43.924 19:24:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:43.924 19:24:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:43.924 [2024-04-18 19:24:59.819731] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:43.924 [2024-04-18 19:24:59.822031] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:43.924 [2024-04-18 19:24:59.822221] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:31:43.924 [2024-04-18 19:24:59.822374] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:31:43.924 [2024-04-18 19:24:59.822482] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:43.924 [2024-04-18 19:24:59.822517] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:31:43.924 request: 00:31:43.924 { 00:31:43.924 "name": 
"raid_bdev1", 00:31:43.924 "raid_level": "raid1", 00:31:43.924 "base_bdevs": [ 00:31:43.924 "malloc1", 00:31:43.924 "malloc2" 00:31:43.924 ], 00:31:43.924 "superblock": false, 00:31:43.924 "method": "bdev_raid_create", 00:31:43.924 "req_id": 1 00:31:43.924 } 00:31:43.924 Got JSON-RPC error response 00:31:43.924 response: 00:31:43.924 { 00:31:43.924 "code": -17, 00:31:43.924 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:43.924 } 00:31:43.924 19:24:59 -- common/autotest_common.sh@641 -- # es=1 00:31:43.924 19:24:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:43.924 19:24:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:43.924 19:24:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:43.924 19:24:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:31:43.924 19:24:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.182 19:25:00 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:31:44.182 19:25:00 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:31:44.182 19:25:00 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:44.440 [2024-04-18 19:25:00.331841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:44.440 [2024-04-18 19:25:00.332155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.440 [2024-04-18 19:25:00.332316] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:44.440 [2024-04-18 19:25:00.332414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.440 [2024-04-18 19:25:00.334914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.440 [2024-04-18 19:25:00.335079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:44.440 [2024-04-18 19:25:00.335265] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:31:44.440 [2024-04-18 19:25:00.335437] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:44.440 pt1 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.440 19:25:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.699 19:25:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:44.699 "name": "raid_bdev1", 00:31:44.699 "uuid": "08f8612b-4219-4adb-b591-070862708b8e", 00:31:44.699 "strip_size_kb": 0, 00:31:44.699 "state": 
"configuring", 00:31:44.699 "raid_level": "raid1", 00:31:44.699 "superblock": true, 00:31:44.699 "num_base_bdevs": 2, 00:31:44.699 "num_base_bdevs_discovered": 1, 00:31:44.699 "num_base_bdevs_operational": 2, 00:31:44.699 "base_bdevs_list": [ 00:31:44.699 { 00:31:44.699 "name": "pt1", 00:31:44.699 "uuid": "5cd86a98-c512-5a94-93d5-832863507ca2", 00:31:44.699 "is_configured": true, 00:31:44.699 "data_offset": 2048, 00:31:44.699 "data_size": 63488 00:31:44.699 }, 00:31:44.699 { 00:31:44.699 "name": null, 00:31:44.699 "uuid": "61a9f515-40cb-5aa6-92bc-ea3b0f8734e2", 00:31:44.699 "is_configured": false, 00:31:44.699 "data_offset": 2048, 00:31:44.699 "data_size": 63488 00:31:44.699 } 00:31:44.699 ] 00:31:44.699 }' 00:31:44.699 19:25:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:44.699 19:25:00 -- common/autotest_common.sh@10 -- # set +x 00:31:45.632 19:25:01 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:31:45.632 19:25:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:31:45.632 19:25:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:31:45.632 19:25:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:45.890 [2024-04-18 19:25:01.580123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:45.890 [2024-04-18 19:25:01.580458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:45.890 [2024-04-18 19:25:01.580590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:45.890 [2024-04-18 19:25:01.580761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:45.890 [2024-04-18 19:25:01.581282] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:45.890 [2024-04-18 19:25:01.581439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:45.890 [2024-04-18 19:25:01.581632] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:31:45.890 [2024-04-18 19:25:01.581732] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:45.890 [2024-04-18 19:25:01.581885] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:31:45.890 [2024-04-18 19:25:01.582052] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:45.890 [2024-04-18 19:25:01.582205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:45.890 [2024-04-18 19:25:01.582749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:31:45.890 [2024-04-18 19:25:01.582855] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:31:45.890 [2024-04-18 19:25:01.583094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.890 pt2 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:45.890 19:25:01 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.890 19:25:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.148 19:25:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:46.148 "name": "raid_bdev1", 00:31:46.148 "uuid": "08f8612b-4219-4adb-b591-070862708b8e", 00:31:46.148 "strip_size_kb": 0, 00:31:46.148 "state": "online", 00:31:46.148 "raid_level": "raid1", 00:31:46.148 "superblock": true, 00:31:46.148 "num_base_bdevs": 2, 00:31:46.148 "num_base_bdevs_discovered": 2, 00:31:46.148 "num_base_bdevs_operational": 2, 00:31:46.148 "base_bdevs_list": [ 00:31:46.148 { 00:31:46.148 "name": "pt1", 00:31:46.148 "uuid": "5cd86a98-c512-5a94-93d5-832863507ca2", 00:31:46.148 "is_configured": true, 00:31:46.148 "data_offset": 2048, 00:31:46.148 "data_size": 63488 00:31:46.148 }, 00:31:46.148 { 00:31:46.148 "name": "pt2", 00:31:46.148 "uuid": "61a9f515-40cb-5aa6-92bc-ea3b0f8734e2", 00:31:46.148 "is_configured": true, 00:31:46.148 "data_offset": 2048, 00:31:46.148 "data_size": 63488 00:31:46.148 } 00:31:46.148 ] 00:31:46.148 }' 00:31:46.148 19:25:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:46.148 19:25:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.717 19:25:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:46.717 19:25:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:31:46.975 [2024-04-18 19:25:02.780669] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:46.975 19:25:02 -- bdev/bdev_raid.sh@430 -- # '[' 08f8612b-4219-4adb-b591-070862708b8e '!=' 08f8612b-4219-4adb-b591-070862708b8e ']' 00:31:46.975 19:25:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:31:46.975 19:25:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:31:46.975 19:25:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:31:46.975 19:25:02 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:47.233 [2024-04-18 19:25:03.024469] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.233 19:25:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.490 19:25:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:47.490 "name": "raid_bdev1", 00:31:47.490 "uuid": "08f8612b-4219-4adb-b591-070862708b8e", 00:31:47.490 "strip_size_kb": 0, 00:31:47.490 "state": "online", 00:31:47.490 "raid_level": "raid1", 00:31:47.490 "superblock": true, 00:31:47.490 "num_base_bdevs": 2, 00:31:47.490 "num_base_bdevs_discovered": 1, 00:31:47.490 "num_base_bdevs_operational": 1, 00:31:47.490 "base_bdevs_list": [ 00:31:47.490 { 00:31:47.490 "name": null, 00:31:47.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.490 "is_configured": false, 00:31:47.490 "data_offset": 2048, 00:31:47.490 "data_size": 63488 00:31:47.490 }, 00:31:47.490 { 00:31:47.490 "name": "pt2", 00:31:47.490 "uuid": "61a9f515-40cb-5aa6-92bc-ea3b0f8734e2", 00:31:47.490 "is_configured": true, 00:31:47.490 "data_offset": 2048, 00:31:47.490 "data_size": 63488 00:31:47.490 } 00:31:47.490 ] 00:31:47.490 }' 00:31:47.490 19:25:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:47.490 19:25:03 -- common/autotest_common.sh@10 -- # set +x 00:31:48.422 19:25:04 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:48.422 [2024-04-18 19:25:04.288744] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:48.422 [2024-04-18 19:25:04.288961] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:48.422 [2024-04-18 19:25:04.289138] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:48.422 [2024-04-18 19:25:04.289296] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:48.422 [2024-04-18 19:25:04.289375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:31:48.422 19:25:04 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.422 19:25:04 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:31:48.694 19:25:04 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:31:48.694 19:25:04 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:31:48.694 19:25:04 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:31:48.694 19:25:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:31:48.694 19:25:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:48.996 19:25:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:31:48.996 19:25:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:31:48.996 19:25:04 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:31:48.996 19:25:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:31:48.996 19:25:04 -- bdev/bdev_raid.sh@462 -- # i=1 00:31:48.996 19:25:04 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:49.254 [2024-04-18 19:25:05.084924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:49.254 [2024-04-18 19:25:05.085168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:49.254 [2024-04-18 19:25:05.085321] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009680 00:31:49.254 [2024-04-18 19:25:05.085441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:49.254 [2024-04-18 19:25:05.087973] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:49.254 [2024-04-18 19:25:05.088139] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:49.254 [2024-04-18 19:25:05.088332] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:31:49.254 [2024-04-18 19:25:05.088459] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:49.254 [2024-04-18 19:25:05.088610] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:31:49.254 [2024-04-18 19:25:05.088692] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:49.254 [2024-04-18 19:25:05.088841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:31:49.254 [2024-04-18 19:25:05.089261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:31:49.254 [2024-04-18 19:25:05.089370] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:31:49.254 [2024-04-18 19:25:05.089617] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.254 pt2 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:49.254 19:25:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:49.255 19:25:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.255 19:25:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.513 19:25:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:49.513 "name": "raid_bdev1", 00:31:49.513 "uuid": "08f8612b-4219-4adb-b591-070862708b8e", 00:31:49.513 "strip_size_kb": 0, 00:31:49.513 "state": "online", 00:31:49.513 "raid_level": "raid1", 00:31:49.513 "superblock": true, 00:31:49.513 "num_base_bdevs": 2, 00:31:49.513 "num_base_bdevs_discovered": 1, 00:31:49.513 "num_base_bdevs_operational": 1, 00:31:49.513 "base_bdevs_list": [ 00:31:49.513 { 00:31:49.513 "name": null, 00:31:49.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.513 "is_configured": false, 00:31:49.513 "data_offset": 2048, 00:31:49.513 "data_size": 63488 00:31:49.513 }, 00:31:49.513 { 00:31:49.513 "name": "pt2", 00:31:49.513 "uuid": "61a9f515-40cb-5aa6-92bc-ea3b0f8734e2", 00:31:49.513 "is_configured": true, 00:31:49.513 "data_offset": 2048, 00:31:49.513 "data_size": 63488 00:31:49.513 } 00:31:49.513 ] 00:31:49.513 }' 00:31:49.513 19:25:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:49.513 19:25:05 -- common/autotest_common.sh@10 -- # set +x 00:31:50.447 19:25:06 -- 
bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:31:50.447 19:25:06 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:50.447 19:25:06 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:31:50.447 [2024-04-18 19:25:06.358070] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:50.713 19:25:06 -- bdev/bdev_raid.sh@506 -- # '[' 08f8612b-4219-4adb-b591-070862708b8e '!=' 08f8612b-4219-4adb-b591-070862708b8e ']' 00:31:50.713 19:25:06 -- bdev/bdev_raid.sh@511 -- # killprocess 123660 00:31:50.713 19:25:06 -- common/autotest_common.sh@936 -- # '[' -z 123660 ']' 00:31:50.713 19:25:06 -- common/autotest_common.sh@940 -- # kill -0 123660 00:31:50.713 19:25:06 -- common/autotest_common.sh@941 -- # uname 00:31:50.713 19:25:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:50.713 19:25:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123660 00:31:50.713 killing process with pid 123660 00:31:50.713 19:25:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:50.713 19:25:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:50.713 19:25:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123660' 00:31:50.713 19:25:06 -- common/autotest_common.sh@955 -- # kill 123660 00:31:50.713 19:25:06 -- common/autotest_common.sh@960 -- # wait 123660 00:31:50.713 [2024-04-18 19:25:06.399465] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:50.713 [2024-04-18 19:25:06.399538] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:50.713 [2024-04-18 19:25:06.399589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:50.713 [2024-04-18 19:25:06.399599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:31:50.713 [2024-04-18 19:25:06.617100] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:52.616 ************************************ 00:31:52.616 END TEST raid_superblock_test 00:31:52.616 ************************************ 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@513 -- # return 0 00:31:52.616 00:31:52.616 real 0m13.301s 00:31:52.616 user 0m23.340s 00:31:52.616 sys 0m1.600s 00:31:52.616 19:25:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:52.616 19:25:08 -- common/autotest_common.sh@10 -- # set +x 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:31:52.616 19:25:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:52.616 19:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.616 19:25:08 -- common/autotest_common.sh@10 -- # set +x 00:31:52.616 ************************************ 00:31:52.616 START TEST raid_state_function_test 00:31:52.616 ************************************ 00:31:52.616 19:25:08 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 false 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@205 -- # local 
raid_bdev 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=124057 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124057' 00:31:52.616 Process raid pid: 124057 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124057 /var/tmp/spdk-raid.sock 00:31:52.616 19:25:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:52.616 19:25:08 -- common/autotest_common.sh@817 -- # '[' -z 124057 ']' 00:31:52.616 19:25:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:52.616 19:25:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:52.616 19:25:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:52.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:52.616 19:25:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:52.616 19:25:08 -- common/autotest_common.sh@10 -- # set +x 00:31:52.616 [2024-04-18 19:25:08.185763] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
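Editor's note on the state checks that recur throughout this run: every verify_raid_bdev_state call traced above (bdev_raid.sh@117-@129) follows the same pattern of querying the raid bdev over the test's RPC socket and comparing selected JSON fields against expected values with jq. The lines below are a minimal paraphrase of that pattern, built only from the rpc.py and jq invocations visible in this trace; the helper name and variable names are illustrative and not copied from the repository.

    # Hypothetical condensation of the verification pattern seen in the trace above.
    # Assumes the rpc.py path and the -s /var/tmp/spdk-raid.sock socket used throughout this log.
    check_raid_state() {
        local name=$1 expected_state=$2 expected_level=$3 expected_operational=$4
        local info
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                   bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$info") == "$expected_level" ]] || return 1
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == "$expected_operational" ]] || return 1
    }

    # Example mirroring the check made after deleting pt1 earlier in this log:
    # check_raid_state raid_bdev1 online raid1 1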
00:31:52.616 [2024-04-18 19:25:08.186082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.616 [2024-04-18 19:25:08.350893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.874 [2024-04-18 19:25:08.563413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.874 [2024-04-18 19:25:08.781436] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:53.441 19:25:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:53.441 19:25:09 -- common/autotest_common.sh@850 -- # return 0 00:31:53.441 19:25:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:53.698 [2024-04-18 19:25:09.426975] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:53.698 [2024-04-18 19:25:09.427255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:53.698 [2024-04-18 19:25:09.427354] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:53.698 [2024-04-18 19:25:09.427420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:53.698 [2024-04-18 19:25:09.427501] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:53.698 [2024-04-18 19:25:09.427573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:53.698 19:25:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.955 19:25:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:53.955 "name": "Existed_Raid", 00:31:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.955 "strip_size_kb": 64, 00:31:53.955 "state": "configuring", 00:31:53.955 "raid_level": "raid0", 00:31:53.955 "superblock": false, 00:31:53.956 "num_base_bdevs": 3, 00:31:53.956 "num_base_bdevs_discovered": 0, 00:31:53.956 "num_base_bdevs_operational": 3, 00:31:53.956 "base_bdevs_list": [ 00:31:53.956 { 00:31:53.956 "name": "BaseBdev1", 00:31:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.956 "is_configured": false, 00:31:53.956 "data_offset": 0, 00:31:53.956 "data_size": 0 00:31:53.956 }, 00:31:53.956 { 00:31:53.956 "name": "BaseBdev2", 00:31:53.956 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:53.956 "is_configured": false, 00:31:53.956 "data_offset": 0, 00:31:53.956 "data_size": 0 00:31:53.956 }, 00:31:53.956 { 00:31:53.956 "name": "BaseBdev3", 00:31:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.956 "is_configured": false, 00:31:53.956 "data_offset": 0, 00:31:53.956 "data_size": 0 00:31:53.956 } 00:31:53.956 ] 00:31:53.956 }' 00:31:53.956 19:25:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:53.956 19:25:09 -- common/autotest_common.sh@10 -- # set +x 00:31:54.521 19:25:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:54.780 [2024-04-18 19:25:10.579053] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:54.780 [2024-04-18 19:25:10.579250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:31:54.780 19:25:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:55.038 [2024-04-18 19:25:10.863157] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:55.038 [2024-04-18 19:25:10.863401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:55.038 [2024-04-18 19:25:10.863489] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:55.038 [2024-04-18 19:25:10.863543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:55.038 [2024-04-18 19:25:10.863617] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:55.038 [2024-04-18 19:25:10.863668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:55.038 19:25:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:55.295 [2024-04-18 19:25:11.162114] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:55.295 BaseBdev1 00:31:55.295 19:25:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:31:55.295 19:25:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:31:55.295 19:25:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:55.295 19:25:11 -- common/autotest_common.sh@887 -- # local i 00:31:55.295 19:25:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:55.295 19:25:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:55.295 19:25:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:55.552 19:25:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:55.810 [ 00:31:55.810 { 00:31:55.810 "name": "BaseBdev1", 00:31:55.810 "aliases": [ 00:31:55.810 "e8e31af5-7640-4fcd-89df-acd8904651ec" 00:31:55.810 ], 00:31:55.810 "product_name": "Malloc disk", 00:31:55.810 "block_size": 512, 00:31:55.810 "num_blocks": 65536, 00:31:55.810 "uuid": "e8e31af5-7640-4fcd-89df-acd8904651ec", 00:31:55.810 "assigned_rate_limits": { 00:31:55.810 "rw_ios_per_sec": 0, 00:31:55.810 "rw_mbytes_per_sec": 0, 00:31:55.810 "r_mbytes_per_sec": 0, 00:31:55.810 "w_mbytes_per_sec": 0 
00:31:55.810 }, 00:31:55.810 "claimed": true, 00:31:55.810 "claim_type": "exclusive_write", 00:31:55.810 "zoned": false, 00:31:55.810 "supported_io_types": { 00:31:55.810 "read": true, 00:31:55.810 "write": true, 00:31:55.810 "unmap": true, 00:31:55.810 "write_zeroes": true, 00:31:55.810 "flush": true, 00:31:55.810 "reset": true, 00:31:55.810 "compare": false, 00:31:55.810 "compare_and_write": false, 00:31:55.810 "abort": true, 00:31:55.810 "nvme_admin": false, 00:31:55.810 "nvme_io": false 00:31:55.810 }, 00:31:55.810 "memory_domains": [ 00:31:55.810 { 00:31:55.810 "dma_device_id": "system", 00:31:55.810 "dma_device_type": 1 00:31:55.810 }, 00:31:55.810 { 00:31:55.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.810 "dma_device_type": 2 00:31:55.810 } 00:31:55.810 ], 00:31:55.810 "driver_specific": {} 00:31:55.810 } 00:31:55.810 ] 00:31:55.810 19:25:11 -- common/autotest_common.sh@893 -- # return 0 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:55.810 19:25:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.068 19:25:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:56.068 "name": "Existed_Raid", 00:31:56.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.068 "strip_size_kb": 64, 00:31:56.068 "state": "configuring", 00:31:56.068 "raid_level": "raid0", 00:31:56.068 "superblock": false, 00:31:56.068 "num_base_bdevs": 3, 00:31:56.068 "num_base_bdevs_discovered": 1, 00:31:56.068 "num_base_bdevs_operational": 3, 00:31:56.068 "base_bdevs_list": [ 00:31:56.068 { 00:31:56.068 "name": "BaseBdev1", 00:31:56.068 "uuid": "e8e31af5-7640-4fcd-89df-acd8904651ec", 00:31:56.068 "is_configured": true, 00:31:56.068 "data_offset": 0, 00:31:56.068 "data_size": 65536 00:31:56.068 }, 00:31:56.068 { 00:31:56.068 "name": "BaseBdev2", 00:31:56.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.068 "is_configured": false, 00:31:56.068 "data_offset": 0, 00:31:56.068 "data_size": 0 00:31:56.068 }, 00:31:56.068 { 00:31:56.068 "name": "BaseBdev3", 00:31:56.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.068 "is_configured": false, 00:31:56.068 "data_offset": 0, 00:31:56.068 "data_size": 0 00:31:56.068 } 00:31:56.068 ] 00:31:56.068 }' 00:31:56.068 19:25:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:56.068 19:25:11 -- common/autotest_common.sh@10 -- # set +x 00:31:56.634 19:25:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:56.893 [2024-04-18 19:25:12.794548] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:56.893 
[2024-04-18 19:25:12.794788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:56.893 19:25:12 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:31:56.893 19:25:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:57.153 [2024-04-18 19:25:13.066667] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:57.153 [2024-04-18 19:25:13.068873] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:57.153 [2024-04-18 19:25:13.069045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:57.153 [2024-04-18 19:25:13.069129] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:57.153 [2024-04-18 19:25:13.069225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:57.414 19:25:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:31:57.414 19:25:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:31:57.414 19:25:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.415 19:25:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:57.698 19:25:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:57.698 "name": "Existed_Raid", 00:31:57.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.698 "strip_size_kb": 64, 00:31:57.698 "state": "configuring", 00:31:57.698 "raid_level": "raid0", 00:31:57.698 "superblock": false, 00:31:57.698 "num_base_bdevs": 3, 00:31:57.698 "num_base_bdevs_discovered": 1, 00:31:57.698 "num_base_bdevs_operational": 3, 00:31:57.698 "base_bdevs_list": [ 00:31:57.698 { 00:31:57.698 "name": "BaseBdev1", 00:31:57.698 "uuid": "e8e31af5-7640-4fcd-89df-acd8904651ec", 00:31:57.698 "is_configured": true, 00:31:57.698 "data_offset": 0, 00:31:57.698 "data_size": 65536 00:31:57.698 }, 00:31:57.698 { 00:31:57.698 "name": "BaseBdev2", 00:31:57.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.698 "is_configured": false, 00:31:57.698 "data_offset": 0, 00:31:57.698 "data_size": 0 00:31:57.698 }, 00:31:57.698 { 00:31:57.698 "name": "BaseBdev3", 00:31:57.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.698 "is_configured": false, 00:31:57.698 "data_offset": 0, 00:31:57.698 "data_size": 0 00:31:57.698 } 00:31:57.698 ] 00:31:57.698 }' 00:31:57.698 19:25:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:57.698 19:25:13 -- common/autotest_common.sh@10 
-- # set +x 00:31:58.273 19:25:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:58.533 [2024-04-18 19:25:14.351069] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:58.533 BaseBdev2 00:31:58.533 19:25:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:31:58.533 19:25:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:31:58.533 19:25:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:31:58.533 19:25:14 -- common/autotest_common.sh@887 -- # local i 00:31:58.533 19:25:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:31:58.533 19:25:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:31:58.533 19:25:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:58.791 19:25:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:59.050 [ 00:31:59.050 { 00:31:59.050 "name": "BaseBdev2", 00:31:59.050 "aliases": [ 00:31:59.050 "a9cdaba6-9cfd-4736-a606-0b4a441d622e" 00:31:59.050 ], 00:31:59.050 "product_name": "Malloc disk", 00:31:59.050 "block_size": 512, 00:31:59.050 "num_blocks": 65536, 00:31:59.050 "uuid": "a9cdaba6-9cfd-4736-a606-0b4a441d622e", 00:31:59.050 "assigned_rate_limits": { 00:31:59.050 "rw_ios_per_sec": 0, 00:31:59.050 "rw_mbytes_per_sec": 0, 00:31:59.050 "r_mbytes_per_sec": 0, 00:31:59.050 "w_mbytes_per_sec": 0 00:31:59.050 }, 00:31:59.050 "claimed": true, 00:31:59.050 "claim_type": "exclusive_write", 00:31:59.050 "zoned": false, 00:31:59.050 "supported_io_types": { 00:31:59.050 "read": true, 00:31:59.050 "write": true, 00:31:59.050 "unmap": true, 00:31:59.050 "write_zeroes": true, 00:31:59.050 "flush": true, 00:31:59.050 "reset": true, 00:31:59.050 "compare": false, 00:31:59.050 "compare_and_write": false, 00:31:59.050 "abort": true, 00:31:59.050 "nvme_admin": false, 00:31:59.050 "nvme_io": false 00:31:59.050 }, 00:31:59.050 "memory_domains": [ 00:31:59.050 { 00:31:59.050 "dma_device_id": "system", 00:31:59.050 "dma_device_type": 1 00:31:59.050 }, 00:31:59.050 { 00:31:59.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:59.050 "dma_device_type": 2 00:31:59.050 } 00:31:59.050 ], 00:31:59.050 "driver_specific": {} 00:31:59.050 } 00:31:59.050 ] 00:31:59.050 19:25:14 -- common/autotest_common.sh@893 -- # return 0 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.050 19:25:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.309 19:25:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:31:59.309 "name": "Existed_Raid", 00:31:59.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.309 "strip_size_kb": 64, 00:31:59.309 "state": "configuring", 00:31:59.309 "raid_level": "raid0", 00:31:59.309 "superblock": false, 00:31:59.309 "num_base_bdevs": 3, 00:31:59.309 "num_base_bdevs_discovered": 2, 00:31:59.309 "num_base_bdevs_operational": 3, 00:31:59.309 "base_bdevs_list": [ 00:31:59.309 { 00:31:59.309 "name": "BaseBdev1", 00:31:59.309 "uuid": "e8e31af5-7640-4fcd-89df-acd8904651ec", 00:31:59.309 "is_configured": true, 00:31:59.309 "data_offset": 0, 00:31:59.309 "data_size": 65536 00:31:59.309 }, 00:31:59.309 { 00:31:59.309 "name": "BaseBdev2", 00:31:59.309 "uuid": "a9cdaba6-9cfd-4736-a606-0b4a441d622e", 00:31:59.309 "is_configured": true, 00:31:59.309 "data_offset": 0, 00:31:59.309 "data_size": 65536 00:31:59.309 }, 00:31:59.309 { 00:31:59.309 "name": "BaseBdev3", 00:31:59.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.309 "is_configured": false, 00:31:59.309 "data_offset": 0, 00:31:59.309 "data_size": 0 00:31:59.309 } 00:31:59.309 ] 00:31:59.309 }' 00:31:59.309 19:25:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:31:59.309 19:25:15 -- common/autotest_common.sh@10 -- # set +x 00:32:00.244 19:25:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:00.244 [2024-04-18 19:25:16.104127] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:00.244 [2024-04-18 19:25:16.104377] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:32:00.244 [2024-04-18 19:25:16.104417] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:00.244 [2024-04-18 19:25:16.104650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:32:00.244 [2024-04-18 19:25:16.105101] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:32:00.244 [2024-04-18 19:25:16.105248] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:32:00.244 [2024-04-18 19:25:16.105636] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.244 BaseBdev3 00:32:00.244 19:25:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:32:00.244 19:25:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:32:00.244 19:25:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:00.244 19:25:16 -- common/autotest_common.sh@887 -- # local i 00:32:00.244 19:25:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:00.244 19:25:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:00.244 19:25:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:00.503 19:25:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:00.761 [ 00:32:00.761 { 00:32:00.761 "name": "BaseBdev3", 00:32:00.761 "aliases": [ 00:32:00.761 "9f39a7d1-c6df-4ca8-bb09-f814785ae791" 00:32:00.761 ], 00:32:00.761 "product_name": 
"Malloc disk", 00:32:00.761 "block_size": 512, 00:32:00.761 "num_blocks": 65536, 00:32:00.761 "uuid": "9f39a7d1-c6df-4ca8-bb09-f814785ae791", 00:32:00.761 "assigned_rate_limits": { 00:32:00.761 "rw_ios_per_sec": 0, 00:32:00.761 "rw_mbytes_per_sec": 0, 00:32:00.761 "r_mbytes_per_sec": 0, 00:32:00.761 "w_mbytes_per_sec": 0 00:32:00.761 }, 00:32:00.761 "claimed": true, 00:32:00.761 "claim_type": "exclusive_write", 00:32:00.761 "zoned": false, 00:32:00.761 "supported_io_types": { 00:32:00.761 "read": true, 00:32:00.761 "write": true, 00:32:00.761 "unmap": true, 00:32:00.761 "write_zeroes": true, 00:32:00.761 "flush": true, 00:32:00.761 "reset": true, 00:32:00.761 "compare": false, 00:32:00.761 "compare_and_write": false, 00:32:00.761 "abort": true, 00:32:00.761 "nvme_admin": false, 00:32:00.761 "nvme_io": false 00:32:00.761 }, 00:32:00.761 "memory_domains": [ 00:32:00.761 { 00:32:00.761 "dma_device_id": "system", 00:32:00.761 "dma_device_type": 1 00:32:00.761 }, 00:32:00.761 { 00:32:00.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.761 "dma_device_type": 2 00:32:00.761 } 00:32:00.761 ], 00:32:00.761 "driver_specific": {} 00:32:00.761 } 00:32:00.761 ] 00:32:00.761 19:25:16 -- common/autotest_common.sh@893 -- # return 0 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.761 19:25:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.020 19:25:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:01.020 "name": "Existed_Raid", 00:32:01.020 "uuid": "1c04af6a-c3c0-40d2-b591-f88212475788", 00:32:01.020 "strip_size_kb": 64, 00:32:01.020 "state": "online", 00:32:01.020 "raid_level": "raid0", 00:32:01.020 "superblock": false, 00:32:01.020 "num_base_bdevs": 3, 00:32:01.020 "num_base_bdevs_discovered": 3, 00:32:01.020 "num_base_bdevs_operational": 3, 00:32:01.020 "base_bdevs_list": [ 00:32:01.020 { 00:32:01.020 "name": "BaseBdev1", 00:32:01.020 "uuid": "e8e31af5-7640-4fcd-89df-acd8904651ec", 00:32:01.020 "is_configured": true, 00:32:01.020 "data_offset": 0, 00:32:01.020 "data_size": 65536 00:32:01.020 }, 00:32:01.020 { 00:32:01.020 "name": "BaseBdev2", 00:32:01.020 "uuid": "a9cdaba6-9cfd-4736-a606-0b4a441d622e", 00:32:01.020 "is_configured": true, 00:32:01.020 "data_offset": 0, 00:32:01.020 "data_size": 65536 00:32:01.020 }, 00:32:01.020 { 00:32:01.020 "name": "BaseBdev3", 00:32:01.020 "uuid": "9f39a7d1-c6df-4ca8-bb09-f814785ae791", 00:32:01.020 "is_configured": true, 00:32:01.020 "data_offset": 0, 00:32:01.020 "data_size": 65536 
00:32:01.020 } 00:32:01.020 ] 00:32:01.020 }' 00:32:01.020 19:25:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:01.020 19:25:16 -- common/autotest_common.sh@10 -- # set +x 00:32:01.955 19:25:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:02.213 [2024-04-18 19:25:17.884711] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:02.213 [2024-04-18 19:25:17.884891] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:02.213 [2024-04-18 19:25:17.885044] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.213 19:25:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.471 19:25:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:02.471 "name": "Existed_Raid", 00:32:02.471 "uuid": "1c04af6a-c3c0-40d2-b591-f88212475788", 00:32:02.471 "strip_size_kb": 64, 00:32:02.472 "state": "offline", 00:32:02.472 "raid_level": "raid0", 00:32:02.472 "superblock": false, 00:32:02.472 "num_base_bdevs": 3, 00:32:02.472 "num_base_bdevs_discovered": 2, 00:32:02.472 "num_base_bdevs_operational": 2, 00:32:02.472 "base_bdevs_list": [ 00:32:02.472 { 00:32:02.472 "name": null, 00:32:02.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.472 "is_configured": false, 00:32:02.472 "data_offset": 0, 00:32:02.472 "data_size": 65536 00:32:02.472 }, 00:32:02.472 { 00:32:02.472 "name": "BaseBdev2", 00:32:02.472 "uuid": "a9cdaba6-9cfd-4736-a606-0b4a441d622e", 00:32:02.472 "is_configured": true, 00:32:02.472 "data_offset": 0, 00:32:02.472 "data_size": 65536 00:32:02.472 }, 00:32:02.472 { 00:32:02.472 "name": "BaseBdev3", 00:32:02.472 "uuid": "9f39a7d1-c6df-4ca8-bb09-f814785ae791", 00:32:02.472 "is_configured": true, 00:32:02.472 "data_offset": 0, 00:32:02.472 "data_size": 65536 00:32:02.472 } 00:32:02.472 ] 00:32:02.472 }' 00:32:02.472 19:25:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:02.472 19:25:18 -- common/autotest_common.sh@10 -- # set +x 00:32:03.064 19:25:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:32:03.064 19:25:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:03.064 19:25:18 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.064 19:25:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:32:03.320 19:25:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:32:03.320 19:25:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:03.320 19:25:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:03.578 [2024-04-18 19:25:19.486415] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:03.834 19:25:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:32:03.834 19:25:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:03.834 19:25:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.834 19:25:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:32:04.093 19:25:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:32:04.093 19:25:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:04.093 19:25:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:04.352 [2024-04-18 19:25:20.189073] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:04.352 [2024-04-18 19:25:20.189313] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:32:04.610 19:25:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:32:04.610 19:25:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:04.610 19:25:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.610 19:25:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:32:04.869 19:25:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:32:04.869 19:25:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:32:04.869 19:25:20 -- bdev/bdev_raid.sh@287 -- # killprocess 124057 00:32:04.869 19:25:20 -- common/autotest_common.sh@936 -- # '[' -z 124057 ']' 00:32:04.869 19:25:20 -- common/autotest_common.sh@940 -- # kill -0 124057 00:32:04.869 19:25:20 -- common/autotest_common.sh@941 -- # uname 00:32:04.869 19:25:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:04.869 19:25:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124057 00:32:04.869 killing process with pid 124057 00:32:04.869 19:25:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:04.869 19:25:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:04.869 19:25:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124057' 00:32:04.869 19:25:20 -- common/autotest_common.sh@955 -- # kill 124057 00:32:04.869 19:25:20 -- common/autotest_common.sh@960 -- # wait 124057 00:32:04.869 [2024-04-18 19:25:20.633327] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:04.869 [2024-04-18 19:25:20.633463] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:06.243 ************************************ 00:32:06.243 END TEST raid_state_function_test 00:32:06.243 ************************************ 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:32:06.243 00:32:06.243 real 0m13.938s 00:32:06.243 user 0m24.178s 00:32:06.243 sys 0m1.773s 00:32:06.243 19:25:22 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:32:06.243 19:25:22 -- common/autotest_common.sh@10 -- # set +x 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:32:06.243 19:25:22 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:32:06.243 19:25:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:06.243 19:25:22 -- common/autotest_common.sh@10 -- # set +x 00:32:06.243 ************************************ 00:32:06.243 START TEST raid_state_function_test_sb 00:32:06.243 ************************************ 00:32:06.243 19:25:22 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 true 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=124494 00:32:06.243 Process raid pid: 124494 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124494' 00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124494 /var/tmp/spdk-raid.sock 00:32:06.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
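Editor's note on the superblock variant that starts here: it follows the same lifecycle as the raid0 run above, namely launch bdev_svc on a dedicated RPC socket, wait for it to answer, create malloc base bdevs, assemble them into a raid0 bdev (with -s for an on-disk superblock), and tear everything down. A standalone sketch of that lifecycle, using only RPC calls and paths that appear in this log, might look as follows; the polling loop and the cleanup trap are illustrative additions, not lifted from the test scripts.

    # Illustrative reconstruction of the test lifecycle visible in this log.
    # Assumes the SPDK checkout at /home/vagrant/spdk_repo/spdk used above, already built.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    RPC="$SPDK/scripts/rpc.py -s $SOCK"

    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    svc_pid=$!
    trap 'kill $svc_pid 2>/dev/null; wait $svc_pid 2>/dev/null' EXIT

    # Wait until the RPC server answers (the log's waitforlisten does this with retries).
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # Three 32 MB malloc bdevs with a 512-byte block size, matching the
    # num_blocks 65536 / block_size 512 seen in the bdev dumps above.
    for i in 1 2 3; do $RPC bdev_malloc_create 32 512 -b BaseBdev$i; done

    # raid0 with a 64 KiB strip and an on-disk superblock (-s), then inspect and delete it.
    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $RPC bdev_raid_get_bdevs all
    $RPC bdev_raid_delete Existed_Raid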
00:32:06.243 19:25:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:06.243 19:25:22 -- common/autotest_common.sh@817 -- # '[' -z 124494 ']' 00:32:06.243 19:25:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:06.243 19:25:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:06.243 19:25:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:06.243 19:25:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:06.243 19:25:22 -- common/autotest_common.sh@10 -- # set +x 00:32:06.515 [2024-04-18 19:25:22.228722] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:32:06.515 [2024-04-18 19:25:22.228929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.515 [2024-04-18 19:25:22.414672] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.773 [2024-04-18 19:25:22.679640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.031 [2024-04-18 19:25:22.907983] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:07.289 19:25:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:07.289 19:25:23 -- common/autotest_common.sh@850 -- # return 0 00:32:07.289 19:25:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:07.546 [2024-04-18 19:25:23.424082] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:07.546 [2024-04-18 19:25:23.424163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:07.546 [2024-04-18 19:25:23.424179] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:07.546 [2024-04-18 19:25:23.424201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:07.546 [2024-04-18 19:25:23.424212] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:07.546 [2024-04-18 19:25:23.424259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:07.546 19:25:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.546 19:25:23 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:08.110 19:25:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:08.110 "name": "Existed_Raid", 00:32:08.110 "uuid": "2d84bed5-3cbd-429a-a547-440a61577628", 00:32:08.110 "strip_size_kb": 64, 00:32:08.110 "state": "configuring", 00:32:08.110 "raid_level": "raid0", 00:32:08.110 "superblock": true, 00:32:08.110 "num_base_bdevs": 3, 00:32:08.110 "num_base_bdevs_discovered": 0, 00:32:08.110 "num_base_bdevs_operational": 3, 00:32:08.110 "base_bdevs_list": [ 00:32:08.110 { 00:32:08.110 "name": "BaseBdev1", 00:32:08.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.110 "is_configured": false, 00:32:08.110 "data_offset": 0, 00:32:08.110 "data_size": 0 00:32:08.110 }, 00:32:08.110 { 00:32:08.110 "name": "BaseBdev2", 00:32:08.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.110 "is_configured": false, 00:32:08.110 "data_offset": 0, 00:32:08.110 "data_size": 0 00:32:08.110 }, 00:32:08.110 { 00:32:08.110 "name": "BaseBdev3", 00:32:08.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.110 "is_configured": false, 00:32:08.110 "data_offset": 0, 00:32:08.110 "data_size": 0 00:32:08.110 } 00:32:08.110 ] 00:32:08.110 }' 00:32:08.110 19:25:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:08.110 19:25:23 -- common/autotest_common.sh@10 -- # set +x 00:32:08.708 19:25:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:08.967 [2024-04-18 19:25:24.832165] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:08.967 [2024-04-18 19:25:24.832212] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:08.967 19:25:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:09.226 [2024-04-18 19:25:25.116257] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:09.226 [2024-04-18 19:25:25.116334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:09.226 [2024-04-18 19:25:25.116346] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:09.226 [2024-04-18 19:25:25.116374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:09.226 [2024-04-18 19:25:25.116382] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:09.226 [2024-04-18 19:25:25.116410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:09.226 19:25:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:09.484 [2024-04-18 19:25:25.370755] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:09.484 BaseBdev1 00:32:09.484 19:25:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:32:09.484 19:25:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:32:09.484 19:25:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:09.484 19:25:25 -- common/autotest_common.sh@887 -- # local i 00:32:09.484 19:25:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:09.484 19:25:25 -- common/autotest_common.sh@888 -- # 
bdev_timeout=2000 00:32:09.484 19:25:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:09.742 19:25:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:09.998 [ 00:32:09.998 { 00:32:09.998 "name": "BaseBdev1", 00:32:09.998 "aliases": [ 00:32:09.998 "84bd3440-0fc1-482d-bbb0-f4102d5ee00c" 00:32:09.998 ], 00:32:09.998 "product_name": "Malloc disk", 00:32:09.998 "block_size": 512, 00:32:09.998 "num_blocks": 65536, 00:32:09.998 "uuid": "84bd3440-0fc1-482d-bbb0-f4102d5ee00c", 00:32:09.998 "assigned_rate_limits": { 00:32:09.998 "rw_ios_per_sec": 0, 00:32:09.998 "rw_mbytes_per_sec": 0, 00:32:09.998 "r_mbytes_per_sec": 0, 00:32:09.998 "w_mbytes_per_sec": 0 00:32:09.998 }, 00:32:09.998 "claimed": true, 00:32:09.998 "claim_type": "exclusive_write", 00:32:09.998 "zoned": false, 00:32:09.998 "supported_io_types": { 00:32:09.998 "read": true, 00:32:09.998 "write": true, 00:32:09.998 "unmap": true, 00:32:09.998 "write_zeroes": true, 00:32:09.998 "flush": true, 00:32:09.998 "reset": true, 00:32:09.998 "compare": false, 00:32:09.998 "compare_and_write": false, 00:32:09.998 "abort": true, 00:32:09.998 "nvme_admin": false, 00:32:09.998 "nvme_io": false 00:32:09.998 }, 00:32:09.998 "memory_domains": [ 00:32:09.998 { 00:32:09.998 "dma_device_id": "system", 00:32:09.998 "dma_device_type": 1 00:32:09.998 }, 00:32:09.998 { 00:32:09.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.998 "dma_device_type": 2 00:32:09.998 } 00:32:09.998 ], 00:32:09.998 "driver_specific": {} 00:32:09.998 } 00:32:09.998 ] 00:32:09.998 19:25:25 -- common/autotest_common.sh@893 -- # return 0 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.998 19:25:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:10.256 19:25:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:10.256 "name": "Existed_Raid", 00:32:10.256 "uuid": "08a32df3-ae69-4d6c-9cf0-2b0b01c206dd", 00:32:10.256 "strip_size_kb": 64, 00:32:10.256 "state": "configuring", 00:32:10.256 "raid_level": "raid0", 00:32:10.256 "superblock": true, 00:32:10.256 "num_base_bdevs": 3, 00:32:10.256 "num_base_bdevs_discovered": 1, 00:32:10.256 "num_base_bdevs_operational": 3, 00:32:10.256 "base_bdevs_list": [ 00:32:10.256 { 00:32:10.256 "name": "BaseBdev1", 00:32:10.256 "uuid": "84bd3440-0fc1-482d-bbb0-f4102d5ee00c", 00:32:10.256 "is_configured": true, 00:32:10.256 "data_offset": 2048, 00:32:10.256 "data_size": 63488 00:32:10.256 }, 00:32:10.256 { 00:32:10.256 "name": 
"BaseBdev2", 00:32:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.256 "is_configured": false, 00:32:10.256 "data_offset": 0, 00:32:10.256 "data_size": 0 00:32:10.256 }, 00:32:10.256 { 00:32:10.256 "name": "BaseBdev3", 00:32:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.256 "is_configured": false, 00:32:10.256 "data_offset": 0, 00:32:10.256 "data_size": 0 00:32:10.256 } 00:32:10.256 ] 00:32:10.256 }' 00:32:10.256 19:25:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:10.256 19:25:26 -- common/autotest_common.sh@10 -- # set +x 00:32:11.242 19:25:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:11.242 [2024-04-18 19:25:27.095185] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:11.242 [2024-04-18 19:25:27.095253] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:11.242 19:25:27 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:32:11.242 19:25:27 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:11.806 19:25:27 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:12.063 BaseBdev1 00:32:12.063 19:25:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:32:12.063 19:25:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:32:12.063 19:25:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:12.063 19:25:27 -- common/autotest_common.sh@887 -- # local i 00:32:12.063 19:25:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:12.063 19:25:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:12.063 19:25:27 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:12.063 19:25:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:12.630 [ 00:32:12.630 { 00:32:12.630 "name": "BaseBdev1", 00:32:12.630 "aliases": [ 00:32:12.630 "2b3d0af0-3c4d-43e7-a362-fe95eaa9a3b9" 00:32:12.630 ], 00:32:12.630 "product_name": "Malloc disk", 00:32:12.630 "block_size": 512, 00:32:12.630 "num_blocks": 65536, 00:32:12.630 "uuid": "2b3d0af0-3c4d-43e7-a362-fe95eaa9a3b9", 00:32:12.630 "assigned_rate_limits": { 00:32:12.630 "rw_ios_per_sec": 0, 00:32:12.630 "rw_mbytes_per_sec": 0, 00:32:12.630 "r_mbytes_per_sec": 0, 00:32:12.630 "w_mbytes_per_sec": 0 00:32:12.630 }, 00:32:12.630 "claimed": false, 00:32:12.630 "zoned": false, 00:32:12.630 "supported_io_types": { 00:32:12.630 "read": true, 00:32:12.630 "write": true, 00:32:12.630 "unmap": true, 00:32:12.630 "write_zeroes": true, 00:32:12.630 "flush": true, 00:32:12.630 "reset": true, 00:32:12.630 "compare": false, 00:32:12.630 "compare_and_write": false, 00:32:12.630 "abort": true, 00:32:12.630 "nvme_admin": false, 00:32:12.630 "nvme_io": false 00:32:12.630 }, 00:32:12.630 "memory_domains": [ 00:32:12.630 { 00:32:12.630 "dma_device_id": "system", 00:32:12.630 "dma_device_type": 1 00:32:12.630 }, 00:32:12.630 { 00:32:12.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.630 "dma_device_type": 2 00:32:12.630 } 00:32:12.631 ], 00:32:12.631 "driver_specific": {} 00:32:12.631 } 00:32:12.631 ] 00:32:12.631 19:25:28 -- 
common/autotest_common.sh@893 -- # return 0 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:12.631 [2024-04-18 19:25:28.477934] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:12.631 [2024-04-18 19:25:28.480109] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:12.631 [2024-04-18 19:25:28.480173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:12.631 [2024-04-18 19:25:28.480184] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:12.631 [2024-04-18 19:25:28.480209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.631 19:25:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.889 19:25:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:12.889 "name": "Existed_Raid", 00:32:12.889 "uuid": "b822aa64-ddad-4c0f-9f99-d19ff355a2a3", 00:32:12.889 "strip_size_kb": 64, 00:32:12.889 "state": "configuring", 00:32:12.889 "raid_level": "raid0", 00:32:12.889 "superblock": true, 00:32:12.889 "num_base_bdevs": 3, 00:32:12.889 "num_base_bdevs_discovered": 1, 00:32:12.889 "num_base_bdevs_operational": 3, 00:32:12.889 "base_bdevs_list": [ 00:32:12.889 { 00:32:12.889 "name": "BaseBdev1", 00:32:12.889 "uuid": "2b3d0af0-3c4d-43e7-a362-fe95eaa9a3b9", 00:32:12.889 "is_configured": true, 00:32:12.889 "data_offset": 2048, 00:32:12.889 "data_size": 63488 00:32:12.889 }, 00:32:12.889 { 00:32:12.889 "name": "BaseBdev2", 00:32:12.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.889 "is_configured": false, 00:32:12.889 "data_offset": 0, 00:32:12.889 "data_size": 0 00:32:12.889 }, 00:32:12.889 { 00:32:12.889 "name": "BaseBdev3", 00:32:12.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.889 "is_configured": false, 00:32:12.889 "data_offset": 0, 00:32:12.889 "data_size": 0 00:32:12.889 } 00:32:12.889 ] 00:32:12.889 }' 00:32:12.889 19:25:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:12.889 19:25:28 -- common/autotest_common.sh@10 -- # set +x 00:32:13.916 19:25:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:32:13.916 [2024-04-18 19:25:29.765460] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:13.916 BaseBdev2 00:32:13.916 19:25:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:32:13.916 19:25:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:32:13.916 19:25:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:13.916 19:25:29 -- common/autotest_common.sh@887 -- # local i 00:32:13.916 19:25:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:13.916 19:25:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:13.916 19:25:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:14.175 19:25:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:14.433 [ 00:32:14.433 { 00:32:14.433 "name": "BaseBdev2", 00:32:14.433 "aliases": [ 00:32:14.433 "f085429b-a2ef-4f2e-855c-fd42e191a4ef" 00:32:14.433 ], 00:32:14.433 "product_name": "Malloc disk", 00:32:14.433 "block_size": 512, 00:32:14.433 "num_blocks": 65536, 00:32:14.433 "uuid": "f085429b-a2ef-4f2e-855c-fd42e191a4ef", 00:32:14.433 "assigned_rate_limits": { 00:32:14.433 "rw_ios_per_sec": 0, 00:32:14.433 "rw_mbytes_per_sec": 0, 00:32:14.433 "r_mbytes_per_sec": 0, 00:32:14.433 "w_mbytes_per_sec": 0 00:32:14.433 }, 00:32:14.433 "claimed": true, 00:32:14.433 "claim_type": "exclusive_write", 00:32:14.433 "zoned": false, 00:32:14.433 "supported_io_types": { 00:32:14.433 "read": true, 00:32:14.433 "write": true, 00:32:14.433 "unmap": true, 00:32:14.433 "write_zeroes": true, 00:32:14.433 "flush": true, 00:32:14.433 "reset": true, 00:32:14.433 "compare": false, 00:32:14.433 "compare_and_write": false, 00:32:14.433 "abort": true, 00:32:14.433 "nvme_admin": false, 00:32:14.433 "nvme_io": false 00:32:14.433 }, 00:32:14.433 "memory_domains": [ 00:32:14.433 { 00:32:14.433 "dma_device_id": "system", 00:32:14.433 "dma_device_type": 1 00:32:14.433 }, 00:32:14.433 { 00:32:14.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.433 "dma_device_type": 2 00:32:14.433 } 00:32:14.433 ], 00:32:14.433 "driver_specific": {} 00:32:14.433 } 00:32:14.433 ] 00:32:14.433 19:25:30 -- common/autotest_common.sh@893 -- # return 0 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:14.433 19:25:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:32:14.691 19:25:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:14.691 "name": "Existed_Raid", 00:32:14.691 "uuid": "b822aa64-ddad-4c0f-9f99-d19ff355a2a3", 00:32:14.691 "strip_size_kb": 64, 00:32:14.691 "state": "configuring", 00:32:14.691 "raid_level": "raid0", 00:32:14.691 "superblock": true, 00:32:14.691 "num_base_bdevs": 3, 00:32:14.691 "num_base_bdevs_discovered": 2, 00:32:14.691 "num_base_bdevs_operational": 3, 00:32:14.691 "base_bdevs_list": [ 00:32:14.691 { 00:32:14.691 "name": "BaseBdev1", 00:32:14.691 "uuid": "2b3d0af0-3c4d-43e7-a362-fe95eaa9a3b9", 00:32:14.691 "is_configured": true, 00:32:14.691 "data_offset": 2048, 00:32:14.692 "data_size": 63488 00:32:14.692 }, 00:32:14.692 { 00:32:14.692 "name": "BaseBdev2", 00:32:14.692 "uuid": "f085429b-a2ef-4f2e-855c-fd42e191a4ef", 00:32:14.692 "is_configured": true, 00:32:14.692 "data_offset": 2048, 00:32:14.692 "data_size": 63488 00:32:14.692 }, 00:32:14.692 { 00:32:14.692 "name": "BaseBdev3", 00:32:14.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.692 "is_configured": false, 00:32:14.692 "data_offset": 0, 00:32:14.692 "data_size": 0 00:32:14.692 } 00:32:14.692 ] 00:32:14.692 }' 00:32:14.692 19:25:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:14.692 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:32:15.626 19:25:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:15.883 [2024-04-18 19:25:31.640729] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:15.883 [2024-04-18 19:25:31.640959] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:32:15.883 [2024-04-18 19:25:31.640973] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:15.883 [2024-04-18 19:25:31.641153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:32:15.883 BaseBdev3 00:32:15.883 [2024-04-18 19:25:31.641495] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:32:15.883 [2024-04-18 19:25:31.641518] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:32:15.883 [2024-04-18 19:25:31.641679] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:15.883 19:25:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:32:15.883 19:25:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:32:15.883 19:25:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:15.883 19:25:31 -- common/autotest_common.sh@887 -- # local i 00:32:15.883 19:25:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:15.883 19:25:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:15.883 19:25:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:16.142 19:25:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:16.472 [ 00:32:16.472 { 00:32:16.472 "name": "BaseBdev3", 00:32:16.472 "aliases": [ 00:32:16.472 "85f8989a-8ec3-4819-863d-e524fb67154e" 00:32:16.472 ], 00:32:16.472 "product_name": "Malloc disk", 00:32:16.472 "block_size": 512, 00:32:16.472 "num_blocks": 65536, 00:32:16.473 "uuid": "85f8989a-8ec3-4819-863d-e524fb67154e", 00:32:16.473 "assigned_rate_limits": { 
00:32:16.473 "rw_ios_per_sec": 0, 00:32:16.473 "rw_mbytes_per_sec": 0, 00:32:16.473 "r_mbytes_per_sec": 0, 00:32:16.473 "w_mbytes_per_sec": 0 00:32:16.473 }, 00:32:16.473 "claimed": true, 00:32:16.473 "claim_type": "exclusive_write", 00:32:16.473 "zoned": false, 00:32:16.473 "supported_io_types": { 00:32:16.473 "read": true, 00:32:16.473 "write": true, 00:32:16.473 "unmap": true, 00:32:16.473 "write_zeroes": true, 00:32:16.473 "flush": true, 00:32:16.473 "reset": true, 00:32:16.473 "compare": false, 00:32:16.473 "compare_and_write": false, 00:32:16.473 "abort": true, 00:32:16.473 "nvme_admin": false, 00:32:16.473 "nvme_io": false 00:32:16.473 }, 00:32:16.473 "memory_domains": [ 00:32:16.473 { 00:32:16.473 "dma_device_id": "system", 00:32:16.473 "dma_device_type": 1 00:32:16.473 }, 00:32:16.473 { 00:32:16.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.473 "dma_device_type": 2 00:32:16.473 } 00:32:16.473 ], 00:32:16.473 "driver_specific": {} 00:32:16.473 } 00:32:16.473 ] 00:32:16.473 19:25:32 -- common/autotest_common.sh@893 -- # return 0 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.473 19:25:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.737 19:25:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:16.737 "name": "Existed_Raid", 00:32:16.737 "uuid": "b822aa64-ddad-4c0f-9f99-d19ff355a2a3", 00:32:16.737 "strip_size_kb": 64, 00:32:16.737 "state": "online", 00:32:16.737 "raid_level": "raid0", 00:32:16.737 "superblock": true, 00:32:16.737 "num_base_bdevs": 3, 00:32:16.737 "num_base_bdevs_discovered": 3, 00:32:16.737 "num_base_bdevs_operational": 3, 00:32:16.737 "base_bdevs_list": [ 00:32:16.737 { 00:32:16.737 "name": "BaseBdev1", 00:32:16.737 "uuid": "2b3d0af0-3c4d-43e7-a362-fe95eaa9a3b9", 00:32:16.737 "is_configured": true, 00:32:16.737 "data_offset": 2048, 00:32:16.737 "data_size": 63488 00:32:16.737 }, 00:32:16.737 { 00:32:16.737 "name": "BaseBdev2", 00:32:16.737 "uuid": "f085429b-a2ef-4f2e-855c-fd42e191a4ef", 00:32:16.737 "is_configured": true, 00:32:16.737 "data_offset": 2048, 00:32:16.737 "data_size": 63488 00:32:16.737 }, 00:32:16.737 { 00:32:16.737 "name": "BaseBdev3", 00:32:16.737 "uuid": "85f8989a-8ec3-4819-863d-e524fb67154e", 00:32:16.737 "is_configured": true, 00:32:16.737 "data_offset": 2048, 00:32:16.737 "data_size": 63488 00:32:16.737 } 00:32:16.737 ] 00:32:16.737 }' 00:32:16.737 19:25:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:16.737 19:25:32 -- common/autotest_common.sh@10 -- # set +x 
00:32:17.308 19:25:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:17.565 [2024-04-18 19:25:33.385294] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:17.565 [2024-04-18 19:25:33.385339] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:17.565 [2024-04-18 19:25:33.385395] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.822 19:25:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:18.080 19:25:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:18.080 "name": "Existed_Raid", 00:32:18.080 "uuid": "b822aa64-ddad-4c0f-9f99-d19ff355a2a3", 00:32:18.080 "strip_size_kb": 64, 00:32:18.080 "state": "offline", 00:32:18.080 "raid_level": "raid0", 00:32:18.080 "superblock": true, 00:32:18.080 "num_base_bdevs": 3, 00:32:18.080 "num_base_bdevs_discovered": 2, 00:32:18.080 "num_base_bdevs_operational": 2, 00:32:18.080 "base_bdevs_list": [ 00:32:18.080 { 00:32:18.080 "name": null, 00:32:18.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.080 "is_configured": false, 00:32:18.080 "data_offset": 2048, 00:32:18.080 "data_size": 63488 00:32:18.080 }, 00:32:18.080 { 00:32:18.080 "name": "BaseBdev2", 00:32:18.080 "uuid": "f085429b-a2ef-4f2e-855c-fd42e191a4ef", 00:32:18.080 "is_configured": true, 00:32:18.080 "data_offset": 2048, 00:32:18.080 "data_size": 63488 00:32:18.080 }, 00:32:18.080 { 00:32:18.080 "name": "BaseBdev3", 00:32:18.080 "uuid": "85f8989a-8ec3-4819-863d-e524fb67154e", 00:32:18.080 "is_configured": true, 00:32:18.080 "data_offset": 2048, 00:32:18.080 "data_size": 63488 00:32:18.080 } 00:32:18.080 ] 00:32:18.080 }' 00:32:18.080 19:25:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:18.080 19:25:33 -- common/autotest_common.sh@10 -- # set +x 00:32:18.645 19:25:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:32:18.645 19:25:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:18.645 19:25:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.645 19:25:34 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:32:18.941 19:25:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:32:18.941 19:25:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:18.941 19:25:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:19.199 [2024-04-18 19:25:35.113488] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:19.458 19:25:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:32:19.458 19:25:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:19.458 19:25:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.458 19:25:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:32:19.715 19:25:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:32:19.715 19:25:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:19.715 19:25:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:19.973 [2024-04-18 19:25:35.835427] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:19.973 [2024-04-18 19:25:35.835499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:32:20.232 19:25:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:32:20.232 19:25:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:20.232 19:25:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.232 19:25:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:32:20.490 19:25:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:32:20.490 19:25:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:32:20.490 19:25:36 -- bdev/bdev_raid.sh@287 -- # killprocess 124494 00:32:20.490 19:25:36 -- common/autotest_common.sh@936 -- # '[' -z 124494 ']' 00:32:20.490 19:25:36 -- common/autotest_common.sh@940 -- # kill -0 124494 00:32:20.490 19:25:36 -- common/autotest_common.sh@941 -- # uname 00:32:20.490 19:25:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:20.490 19:25:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124494 00:32:20.490 killing process with pid 124494 00:32:20.490 19:25:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:20.490 19:25:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:20.490 19:25:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124494' 00:32:20.490 19:25:36 -- common/autotest_common.sh@955 -- # kill 124494 00:32:20.490 19:25:36 -- common/autotest_common.sh@960 -- # wait 124494 00:32:20.490 [2024-04-18 19:25:36.291535] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:20.490 [2024-04-18 19:25:36.291685] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:21.865 ************************************ 00:32:21.865 END TEST raid_state_function_test_sb 00:32:21.865 ************************************ 00:32:21.865 19:25:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:32:21.865 00:32:21.865 real 0m15.586s 00:32:21.865 user 0m27.174s 00:32:21.865 sys 0m2.060s 00:32:21.865 19:25:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:21.865 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:32:21.865 19:25:37 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test 
raid_superblock_test raid0 3 00:32:21.865 19:25:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:32:21.865 19:25:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:21.865 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:32:22.123 ************************************ 00:32:22.123 START TEST raid_superblock_test 00:32:22.123 ************************************ 00:32:22.123 19:25:37 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 3 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@357 -- # raid_pid=124942 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:22.123 19:25:37 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124942 /var/tmp/spdk-raid.sock 00:32:22.123 19:25:37 -- common/autotest_common.sh@817 -- # '[' -z 124942 ']' 00:32:22.123 19:25:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:22.123 19:25:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:22.123 19:25:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:22.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:22.123 19:25:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:22.123 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:32:22.123 [2024-04-18 19:25:37.908971] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:32:22.123 [2024-04-18 19:25:37.909199] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124942 ] 00:32:22.381 [2024-04-18 19:25:38.081138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.381 [2024-04-18 19:25:38.291557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.639 [2024-04-18 19:25:38.500865] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:23.203 19:25:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:23.203 19:25:38 -- common/autotest_common.sh@850 -- # return 0 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:23.203 19:25:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:32:23.203 malloc1 00:32:23.203 19:25:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:23.462 [2024-04-18 19:25:39.371905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:23.462 [2024-04-18 19:25:39.372004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:23.462 [2024-04-18 19:25:39.372037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:23.462 [2024-04-18 19:25:39.372090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:23.462 [2024-04-18 19:25:39.374642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:23.462 [2024-04-18 19:25:39.374696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:23.462 pt1 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:23.462 19:25:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:32:24.070 malloc2 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:32:24.070 [2024-04-18 19:25:39.873867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:24.070 [2024-04-18 19:25:39.873957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:24.070 [2024-04-18 19:25:39.874000] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:32:24.070 [2024-04-18 19:25:39.874056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:24.070 [2024-04-18 19:25:39.876588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:24.070 [2024-04-18 19:25:39.876636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:24.070 pt2 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:24.070 19:25:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:32:24.358 malloc3 00:32:24.358 19:25:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:24.616 [2024-04-18 19:25:40.480397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:24.616 [2024-04-18 19:25:40.480484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:24.616 [2024-04-18 19:25:40.480525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:24.616 [2024-04-18 19:25:40.480572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:24.616 [2024-04-18 19:25:40.483120] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:24.616 [2024-04-18 19:25:40.483180] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:24.616 pt3 00:32:24.616 19:25:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:32:24.616 19:25:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:32:24.616 19:25:40 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:32:24.874 [2024-04-18 19:25:40.792498] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:24.874 [2024-04-18 19:25:40.794708] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:24.874 [2024-04-18 19:25:40.794779] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:24.874 [2024-04-18 19:25:40.794976] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:32:24.874 [2024-04-18 19:25:40.794994] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:24.874 [2024-04-18 19:25:40.795148] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:32:24.874 [2024-04-18 19:25:40.795542] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:32:24.874 [2024-04-18 19:25:40.795563] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:32:24.874 [2024-04-18 19:25:40.795700] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.132 19:25:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.390 19:25:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:25.390 "name": "raid_bdev1", 00:32:25.390 "uuid": "7f25c7e0-510f-4a3c-a99e-bd01934ed369", 00:32:25.390 "strip_size_kb": 64, 00:32:25.390 "state": "online", 00:32:25.390 "raid_level": "raid0", 00:32:25.390 "superblock": true, 00:32:25.390 "num_base_bdevs": 3, 00:32:25.390 "num_base_bdevs_discovered": 3, 00:32:25.390 "num_base_bdevs_operational": 3, 00:32:25.390 "base_bdevs_list": [ 00:32:25.390 { 00:32:25.390 "name": "pt1", 00:32:25.390 "uuid": "ea9080a5-1ccd-529d-8a54-c348b45aaa95", 00:32:25.390 "is_configured": true, 00:32:25.390 "data_offset": 2048, 00:32:25.391 "data_size": 63488 00:32:25.391 }, 00:32:25.391 { 00:32:25.391 "name": "pt2", 00:32:25.391 "uuid": "5433f411-5adc-5591-8715-59dfd893a02a", 00:32:25.391 "is_configured": true, 00:32:25.391 "data_offset": 2048, 00:32:25.391 "data_size": 63488 00:32:25.391 }, 00:32:25.391 { 00:32:25.391 "name": "pt3", 00:32:25.391 "uuid": "f3ac7872-140c-5686-a036-6aefdd07db16", 00:32:25.391 "is_configured": true, 00:32:25.391 "data_offset": 2048, 00:32:25.391 "data_size": 63488 00:32:25.391 } 00:32:25.391 ] 00:32:25.391 }' 00:32:25.391 19:25:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:25.391 19:25:41 -- common/autotest_common.sh@10 -- # set +x 00:32:25.955 19:25:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:32:25.955 19:25:41 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:26.520 [2024-04-18 19:25:42.217071] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:26.520 19:25:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7f25c7e0-510f-4a3c-a99e-bd01934ed369 00:32:26.520 19:25:42 -- bdev/bdev_raid.sh@380 -- # '[' -z 7f25c7e0-510f-4a3c-a99e-bd01934ed369 ']' 00:32:26.520 19:25:42 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:26.779 [2024-04-18 19:25:42.525085] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:26.779 [2024-04-18 19:25:42.525144] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:26.779 [2024-04-18 19:25:42.525318] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:26.779 [2024-04-18 19:25:42.525404] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:26.779 [2024-04-18 19:25:42.525418] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:32:26.779 19:25:42 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.779 19:25:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:32:27.037 19:25:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:32:27.037 19:25:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:32:27.037 19:25:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:32:27.037 19:25:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:27.295 19:25:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:32:27.295 19:25:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:27.553 19:25:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:32:27.553 19:25:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:27.811 19:25:43 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:27.811 19:25:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:28.069 19:25:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:32:28.069 19:25:43 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:28.069 19:25:43 -- common/autotest_common.sh@638 -- # local es=0 00:32:28.069 19:25:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:28.069 19:25:43 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:28.069 19:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:28.069 19:25:43 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:28.069 19:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:28.069 19:25:43 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:28.069 19:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:28.069 19:25:43 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:28.069 19:25:43 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:28.069 19:25:43 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:28.328 [2024-04-18 19:25:44.085215] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:28.328 [2024-04-18 19:25:44.087388] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:28.328 [2024-04-18 19:25:44.087446] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:28.328 [2024-04-18 19:25:44.087493] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:32:28.328 [2024-04-18 19:25:44.087562] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:32:28.328 [2024-04-18 19:25:44.087590] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:32:28.328 [2024-04-18 19:25:44.087675] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:28.328 [2024-04-18 19:25:44.087693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:32:28.328 request: 00:32:28.328 { 00:32:28.328 "name": "raid_bdev1", 00:32:28.328 "raid_level": "raid0", 00:32:28.328 "base_bdevs": [ 00:32:28.328 "malloc1", 00:32:28.328 "malloc2", 00:32:28.328 "malloc3" 00:32:28.328 ], 00:32:28.328 "superblock": false, 00:32:28.328 "strip_size_kb": 64, 00:32:28.328 "method": "bdev_raid_create", 00:32:28.328 "req_id": 1 00:32:28.328 } 00:32:28.328 Got JSON-RPC error response 00:32:28.328 response: 00:32:28.328 { 00:32:28.328 "code": -17, 00:32:28.328 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:28.328 } 00:32:28.328 19:25:44 -- common/autotest_common.sh@641 -- # es=1 00:32:28.328 19:25:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:28.328 19:25:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:28.328 19:25:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:28.328 19:25:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.328 19:25:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:32:28.631 19:25:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:32:28.631 19:25:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:32:28.631 19:25:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:28.888 [2024-04-18 19:25:44.665247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:28.888 [2024-04-18 19:25:44.665361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:28.888 [2024-04-18 19:25:44.665399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:28.888 [2024-04-18 19:25:44.665421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:28.888 [2024-04-18 19:25:44.667934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:28.888 [2024-04-18 19:25:44.667986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:28.888 [2024-04-18 19:25:44.668105] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:32:28.888 [2024-04-18 19:25:44.668149] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:28.888 pt1 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.888 19:25:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.146 19:25:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:29.146 "name": "raid_bdev1", 00:32:29.146 "uuid": "7f25c7e0-510f-4a3c-a99e-bd01934ed369", 00:32:29.146 "strip_size_kb": 64, 00:32:29.146 "state": "configuring", 00:32:29.146 "raid_level": "raid0", 00:32:29.146 "superblock": true, 00:32:29.146 "num_base_bdevs": 3, 00:32:29.146 "num_base_bdevs_discovered": 1, 00:32:29.146 "num_base_bdevs_operational": 3, 00:32:29.146 "base_bdevs_list": [ 00:32:29.146 { 00:32:29.146 "name": "pt1", 00:32:29.146 "uuid": "ea9080a5-1ccd-529d-8a54-c348b45aaa95", 00:32:29.146 "is_configured": true, 00:32:29.146 "data_offset": 2048, 00:32:29.146 "data_size": 63488 00:32:29.146 }, 00:32:29.146 { 00:32:29.146 "name": null, 00:32:29.146 "uuid": "5433f411-5adc-5591-8715-59dfd893a02a", 00:32:29.146 "is_configured": false, 00:32:29.146 "data_offset": 2048, 00:32:29.146 "data_size": 63488 00:32:29.146 }, 00:32:29.146 { 00:32:29.146 "name": null, 00:32:29.146 "uuid": "f3ac7872-140c-5686-a036-6aefdd07db16", 00:32:29.146 "is_configured": false, 00:32:29.146 "data_offset": 2048, 00:32:29.146 "data_size": 63488 00:32:29.146 } 00:32:29.146 ] 00:32:29.146 }' 00:32:29.146 19:25:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:29.146 19:25:44 -- common/autotest_common.sh@10 -- # set +x 00:32:30.077 19:25:45 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:32:30.077 19:25:45 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:30.077 [2024-04-18 19:25:45.901469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:30.077 [2024-04-18 19:25:45.901554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.077 [2024-04-18 19:25:45.901597] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:32:30.077 [2024-04-18 19:25:45.901618] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.077 [2024-04-18 19:25:45.902089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.077 [2024-04-18 19:25:45.902126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:30.077 [2024-04-18 19:25:45.902244] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:32:30.077 [2024-04-18 19:25:45.902268] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:30.077 pt2 00:32:30.077 19:25:45 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:30.335 [2024-04-18 19:25:46.141560] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.335 19:25:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.592 19:25:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:30.592 "name": "raid_bdev1", 00:32:30.592 "uuid": "7f25c7e0-510f-4a3c-a99e-bd01934ed369", 00:32:30.592 "strip_size_kb": 64, 00:32:30.592 "state": "configuring", 00:32:30.592 "raid_level": "raid0", 00:32:30.592 "superblock": true, 00:32:30.592 "num_base_bdevs": 3, 00:32:30.592 "num_base_bdevs_discovered": 1, 00:32:30.592 "num_base_bdevs_operational": 3, 00:32:30.592 "base_bdevs_list": [ 00:32:30.592 { 00:32:30.592 "name": "pt1", 00:32:30.592 "uuid": "ea9080a5-1ccd-529d-8a54-c348b45aaa95", 00:32:30.592 "is_configured": true, 00:32:30.592 "data_offset": 2048, 00:32:30.592 "data_size": 63488 00:32:30.592 }, 00:32:30.592 { 00:32:30.592 "name": null, 00:32:30.592 "uuid": "5433f411-5adc-5591-8715-59dfd893a02a", 00:32:30.592 "is_configured": false, 00:32:30.592 "data_offset": 2048, 00:32:30.592 "data_size": 63488 00:32:30.592 }, 00:32:30.592 { 00:32:30.592 "name": null, 00:32:30.592 "uuid": "f3ac7872-140c-5686-a036-6aefdd07db16", 00:32:30.592 "is_configured": false, 00:32:30.592 "data_offset": 2048, 00:32:30.592 "data_size": 63488 00:32:30.592 } 00:32:30.592 ] 00:32:30.592 }' 00:32:30.592 19:25:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:30.592 19:25:46 -- common/autotest_common.sh@10 -- # set +x 00:32:31.162 19:25:47 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:32:31.162 19:25:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:32:31.162 19:25:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:31.422 [2024-04-18 19:25:47.317824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:31.422 [2024-04-18 19:25:47.317937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:31.422 [2024-04-18 19:25:47.317987] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:31.422 [2024-04-18 19:25:47.318032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:31.422 [2024-04-18 19:25:47.318601] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:31.422 [2024-04-18 19:25:47.318680] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:31.422 [2024-04-18 19:25:47.318827] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:32:31.422 [2024-04-18 19:25:47.318864] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:31.422 pt2 00:32:31.422 19:25:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:32:31.422 19:25:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:32:31.422 19:25:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:31.681 [2024-04-18 19:25:47.569872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:31.681 [2024-04-18 19:25:47.569991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:31.681 [2024-04-18 19:25:47.570030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:31.681 [2024-04-18 19:25:47.570067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:31.681 [2024-04-18 19:25:47.570555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:31.681 [2024-04-18 19:25:47.570601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:31.681 [2024-04-18 19:25:47.570748] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:32:31.681 [2024-04-18 19:25:47.570780] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:31.681 [2024-04-18 19:25:47.570932] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:32:31.681 [2024-04-18 19:25:47.570950] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:31.681 [2024-04-18 19:25:47.571069] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:31.681 [2024-04-18 19:25:47.571394] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:32:31.681 [2024-04-18 19:25:47.571413] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:32:31.681 [2024-04-18 19:25:47.571563] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:31.681 pt3 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:31.681 19:25:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:31.682 19:25:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:31.682 19:25:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:31.682 19:25:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:31.682 19:25:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.682 
19:25:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.941 19:25:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:31.941 "name": "raid_bdev1", 00:32:31.941 "uuid": "7f25c7e0-510f-4a3c-a99e-bd01934ed369", 00:32:31.941 "strip_size_kb": 64, 00:32:31.941 "state": "online", 00:32:31.941 "raid_level": "raid0", 00:32:31.941 "superblock": true, 00:32:31.941 "num_base_bdevs": 3, 00:32:31.941 "num_base_bdevs_discovered": 3, 00:32:31.941 "num_base_bdevs_operational": 3, 00:32:31.941 "base_bdevs_list": [ 00:32:31.941 { 00:32:31.941 "name": "pt1", 00:32:31.941 "uuid": "ea9080a5-1ccd-529d-8a54-c348b45aaa95", 00:32:31.941 "is_configured": true, 00:32:31.941 "data_offset": 2048, 00:32:31.941 "data_size": 63488 00:32:31.941 }, 00:32:31.941 { 00:32:31.941 "name": "pt2", 00:32:31.941 "uuid": "5433f411-5adc-5591-8715-59dfd893a02a", 00:32:31.941 "is_configured": true, 00:32:31.941 "data_offset": 2048, 00:32:31.941 "data_size": 63488 00:32:31.941 }, 00:32:31.941 { 00:32:31.941 "name": "pt3", 00:32:31.941 "uuid": "f3ac7872-140c-5686-a036-6aefdd07db16", 00:32:31.941 "is_configured": true, 00:32:31.941 "data_offset": 2048, 00:32:31.941 "data_size": 63488 00:32:31.941 } 00:32:31.941 ] 00:32:31.941 }' 00:32:31.941 19:25:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:31.941 19:25:47 -- common/autotest_common.sh@10 -- # set +x 00:32:32.511 19:25:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:32:32.511 19:25:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:33.076 [2024-04-18 19:25:48.710374] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:33.076 19:25:48 -- bdev/bdev_raid.sh@430 -- # '[' 7f25c7e0-510f-4a3c-a99e-bd01934ed369 '!=' 7f25c7e0-510f-4a3c-a99e-bd01934ed369 ']' 00:32:33.077 19:25:48 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:32:33.077 19:25:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:32:33.077 19:25:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:32:33.077 19:25:48 -- bdev/bdev_raid.sh@511 -- # killprocess 124942 00:32:33.077 19:25:48 -- common/autotest_common.sh@936 -- # '[' -z 124942 ']' 00:32:33.077 19:25:48 -- common/autotest_common.sh@940 -- # kill -0 124942 00:32:33.077 19:25:48 -- common/autotest_common.sh@941 -- # uname 00:32:33.077 19:25:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:33.077 19:25:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124942 00:32:33.077 killing process with pid 124942 00:32:33.077 19:25:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:33.077 19:25:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:33.077 19:25:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124942' 00:32:33.077 19:25:48 -- common/autotest_common.sh@955 -- # kill 124942 00:32:33.077 19:25:48 -- common/autotest_common.sh@960 -- # wait 124942 00:32:33.077 [2024-04-18 19:25:48.763779] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:33.077 [2024-04-18 19:25:48.763860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:33.077 [2024-04-18 19:25:48.763915] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:33.077 [2024-04-18 19:25:48.763925] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:32:33.334 [2024-04-18 19:25:49.082532] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:34.708 ************************************ 00:32:34.708 END TEST raid_superblock_test 00:32:34.708 ************************************ 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@513 -- # return 0 00:32:34.708 00:32:34.708 real 0m12.663s 00:32:34.708 user 0m21.866s 00:32:34.708 sys 0m1.572s 00:32:34.708 19:25:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:34.708 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:32:34.708 19:25:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:32:34.708 19:25:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:34.708 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:32:34.708 ************************************ 00:32:34.708 START TEST raid_state_function_test 00:32:34.708 ************************************ 00:32:34.708 19:25:50 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 false 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=125293 00:32:34.708 Process raid pid: 125293 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125293' 00:32:34.708 19:25:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125293 
/var/tmp/spdk-raid.sock 00:32:34.708 19:25:50 -- common/autotest_common.sh@817 -- # '[' -z 125293 ']' 00:32:34.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:34.708 19:25:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:34.708 19:25:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:34.708 19:25:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:34.708 19:25:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:34.708 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:32:34.965 [2024-04-18 19:25:50.653630] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:32:34.965 [2024-04-18 19:25:50.653785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.965 [2024-04-18 19:25:50.820621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.223 [2024-04-18 19:25:51.059322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.479 [2024-04-18 19:25:51.290721] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:35.737 19:25:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:35.737 19:25:51 -- common/autotest_common.sh@850 -- # return 0 00:32:35.737 19:25:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:36.303 [2024-04-18 19:25:51.930983] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:36.303 [2024-04-18 19:25:51.931065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:36.303 [2024-04-18 19:25:51.931078] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:36.303 [2024-04-18 19:25:51.931099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:36.303 [2024-04-18 19:25:51.931106] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:36.303 [2024-04-18 19:25:51.931147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.303 19:25:51 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:32:36.303 19:25:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:36.303 "name": "Existed_Raid", 00:32:36.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.303 "strip_size_kb": 64, 00:32:36.303 "state": "configuring", 00:32:36.303 "raid_level": "concat", 00:32:36.303 "superblock": false, 00:32:36.303 "num_base_bdevs": 3, 00:32:36.303 "num_base_bdevs_discovered": 0, 00:32:36.303 "num_base_bdevs_operational": 3, 00:32:36.303 "base_bdevs_list": [ 00:32:36.303 { 00:32:36.303 "name": "BaseBdev1", 00:32:36.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.303 "is_configured": false, 00:32:36.303 "data_offset": 0, 00:32:36.303 "data_size": 0 00:32:36.303 }, 00:32:36.303 { 00:32:36.303 "name": "BaseBdev2", 00:32:36.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.303 "is_configured": false, 00:32:36.303 "data_offset": 0, 00:32:36.303 "data_size": 0 00:32:36.303 }, 00:32:36.303 { 00:32:36.303 "name": "BaseBdev3", 00:32:36.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.303 "is_configured": false, 00:32:36.303 "data_offset": 0, 00:32:36.303 "data_size": 0 00:32:36.303 } 00:32:36.303 ] 00:32:36.303 }' 00:32:36.303 19:25:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:36.303 19:25:52 -- common/autotest_common.sh@10 -- # set +x 00:32:36.869 19:25:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:37.434 [2024-04-18 19:25:53.059124] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:37.434 [2024-04-18 19:25:53.059166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:37.434 19:25:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:37.434 [2024-04-18 19:25:53.279176] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:37.434 [2024-04-18 19:25:53.279250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:37.434 [2024-04-18 19:25:53.279261] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:37.434 [2024-04-18 19:25:53.279287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:37.434 [2024-04-18 19:25:53.279296] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:37.434 [2024-04-18 19:25:53.279321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:37.434 19:25:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:37.691 [2024-04-18 19:25:53.591476] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.691 BaseBdev1 00:32:37.691 19:25:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:32:37.691 19:25:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:32:37.691 19:25:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:37.691 19:25:53 -- common/autotest_common.sh@887 -- # local i 00:32:37.691 19:25:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:37.691 19:25:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:37.691 19:25:53 -- 
common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:38.256 19:25:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:38.256 [ 00:32:38.256 { 00:32:38.256 "name": "BaseBdev1", 00:32:38.256 "aliases": [ 00:32:38.256 "035e5fba-bb64-4293-9442-c8fd2bcd4543" 00:32:38.256 ], 00:32:38.256 "product_name": "Malloc disk", 00:32:38.256 "block_size": 512, 00:32:38.256 "num_blocks": 65536, 00:32:38.256 "uuid": "035e5fba-bb64-4293-9442-c8fd2bcd4543", 00:32:38.256 "assigned_rate_limits": { 00:32:38.256 "rw_ios_per_sec": 0, 00:32:38.256 "rw_mbytes_per_sec": 0, 00:32:38.256 "r_mbytes_per_sec": 0, 00:32:38.256 "w_mbytes_per_sec": 0 00:32:38.256 }, 00:32:38.256 "claimed": true, 00:32:38.256 "claim_type": "exclusive_write", 00:32:38.256 "zoned": false, 00:32:38.256 "supported_io_types": { 00:32:38.256 "read": true, 00:32:38.256 "write": true, 00:32:38.256 "unmap": true, 00:32:38.256 "write_zeroes": true, 00:32:38.256 "flush": true, 00:32:38.256 "reset": true, 00:32:38.256 "compare": false, 00:32:38.257 "compare_and_write": false, 00:32:38.257 "abort": true, 00:32:38.257 "nvme_admin": false, 00:32:38.257 "nvme_io": false 00:32:38.257 }, 00:32:38.257 "memory_domains": [ 00:32:38.257 { 00:32:38.257 "dma_device_id": "system", 00:32:38.257 "dma_device_type": 1 00:32:38.257 }, 00:32:38.257 { 00:32:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.257 "dma_device_type": 2 00:32:38.257 } 00:32:38.257 ], 00:32:38.257 "driver_specific": {} 00:32:38.257 } 00:32:38.257 ] 00:32:38.257 19:25:54 -- common/autotest_common.sh@893 -- # return 0 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.257 19:25:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.544 19:25:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:38.544 "name": "Existed_Raid", 00:32:38.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.544 "strip_size_kb": 64, 00:32:38.544 "state": "configuring", 00:32:38.544 "raid_level": "concat", 00:32:38.544 "superblock": false, 00:32:38.544 "num_base_bdevs": 3, 00:32:38.544 "num_base_bdevs_discovered": 1, 00:32:38.544 "num_base_bdevs_operational": 3, 00:32:38.544 "base_bdevs_list": [ 00:32:38.544 { 00:32:38.544 "name": "BaseBdev1", 00:32:38.544 "uuid": "035e5fba-bb64-4293-9442-c8fd2bcd4543", 00:32:38.544 "is_configured": true, 00:32:38.544 "data_offset": 0, 00:32:38.544 "data_size": 65536 00:32:38.544 }, 00:32:38.544 { 00:32:38.544 "name": "BaseBdev2", 00:32:38.544 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:38.544 "is_configured": false, 00:32:38.544 "data_offset": 0, 00:32:38.544 "data_size": 0 00:32:38.544 }, 00:32:38.545 { 00:32:38.545 "name": "BaseBdev3", 00:32:38.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.545 "is_configured": false, 00:32:38.545 "data_offset": 0, 00:32:38.545 "data_size": 0 00:32:38.545 } 00:32:38.545 ] 00:32:38.545 }' 00:32:38.545 19:25:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:38.545 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:32:39.480 19:25:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:39.480 [2024-04-18 19:25:55.371973] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:39.480 [2024-04-18 19:25:55.372031] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:39.480 19:25:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:32:39.480 19:25:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:39.741 [2024-04-18 19:25:55.660095] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:39.741 [2024-04-18 19:25:55.662255] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:39.741 [2024-04-18 19:25:55.662320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:39.741 [2024-04-18 19:25:55.662330] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:39.741 [2024-04-18 19:25:55.662356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.999 19:25:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.257 19:25:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:40.257 "name": "Existed_Raid", 00:32:40.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.257 "strip_size_kb": 64, 00:32:40.257 "state": "configuring", 00:32:40.257 "raid_level": "concat", 00:32:40.257 "superblock": false, 00:32:40.257 "num_base_bdevs": 3, 00:32:40.257 "num_base_bdevs_discovered": 1, 00:32:40.257 "num_base_bdevs_operational": 3, 00:32:40.257 
"base_bdevs_list": [ 00:32:40.257 { 00:32:40.257 "name": "BaseBdev1", 00:32:40.257 "uuid": "035e5fba-bb64-4293-9442-c8fd2bcd4543", 00:32:40.257 "is_configured": true, 00:32:40.257 "data_offset": 0, 00:32:40.257 "data_size": 65536 00:32:40.257 }, 00:32:40.257 { 00:32:40.257 "name": "BaseBdev2", 00:32:40.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.257 "is_configured": false, 00:32:40.257 "data_offset": 0, 00:32:40.257 "data_size": 0 00:32:40.257 }, 00:32:40.257 { 00:32:40.257 "name": "BaseBdev3", 00:32:40.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.257 "is_configured": false, 00:32:40.257 "data_offset": 0, 00:32:40.257 "data_size": 0 00:32:40.257 } 00:32:40.257 ] 00:32:40.257 }' 00:32:40.257 19:25:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:40.257 19:25:55 -- common/autotest_common.sh@10 -- # set +x 00:32:40.823 19:25:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:41.081 [2024-04-18 19:25:56.909446] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:41.081 BaseBdev2 00:32:41.081 19:25:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:32:41.081 19:25:56 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:32:41.081 19:25:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:41.081 19:25:56 -- common/autotest_common.sh@887 -- # local i 00:32:41.081 19:25:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:41.081 19:25:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:41.081 19:25:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:41.339 19:25:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:41.597 [ 00:32:41.597 { 00:32:41.597 "name": "BaseBdev2", 00:32:41.597 "aliases": [ 00:32:41.597 "9b433e44-dba1-4a0f-ad18-96f0b44f13c2" 00:32:41.597 ], 00:32:41.597 "product_name": "Malloc disk", 00:32:41.597 "block_size": 512, 00:32:41.597 "num_blocks": 65536, 00:32:41.597 "uuid": "9b433e44-dba1-4a0f-ad18-96f0b44f13c2", 00:32:41.597 "assigned_rate_limits": { 00:32:41.597 "rw_ios_per_sec": 0, 00:32:41.597 "rw_mbytes_per_sec": 0, 00:32:41.597 "r_mbytes_per_sec": 0, 00:32:41.597 "w_mbytes_per_sec": 0 00:32:41.597 }, 00:32:41.597 "claimed": true, 00:32:41.597 "claim_type": "exclusive_write", 00:32:41.597 "zoned": false, 00:32:41.597 "supported_io_types": { 00:32:41.597 "read": true, 00:32:41.597 "write": true, 00:32:41.597 "unmap": true, 00:32:41.597 "write_zeroes": true, 00:32:41.597 "flush": true, 00:32:41.597 "reset": true, 00:32:41.597 "compare": false, 00:32:41.597 "compare_and_write": false, 00:32:41.597 "abort": true, 00:32:41.597 "nvme_admin": false, 00:32:41.597 "nvme_io": false 00:32:41.597 }, 00:32:41.597 "memory_domains": [ 00:32:41.597 { 00:32:41.597 "dma_device_id": "system", 00:32:41.597 "dma_device_type": 1 00:32:41.597 }, 00:32:41.597 { 00:32:41.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.597 "dma_device_type": 2 00:32:41.597 } 00:32:41.597 ], 00:32:41.597 "driver_specific": {} 00:32:41.597 } 00:32:41.597 ] 00:32:41.597 19:25:57 -- common/autotest_common.sh@893 -- # return 0 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:41.597 19:25:57 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.597 19:25:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.856 19:25:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:41.856 "name": "Existed_Raid", 00:32:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.856 "strip_size_kb": 64, 00:32:41.856 "state": "configuring", 00:32:41.856 "raid_level": "concat", 00:32:41.856 "superblock": false, 00:32:41.856 "num_base_bdevs": 3, 00:32:41.856 "num_base_bdevs_discovered": 2, 00:32:41.856 "num_base_bdevs_operational": 3, 00:32:41.856 "base_bdevs_list": [ 00:32:41.856 { 00:32:41.856 "name": "BaseBdev1", 00:32:41.856 "uuid": "035e5fba-bb64-4293-9442-c8fd2bcd4543", 00:32:41.856 "is_configured": true, 00:32:41.856 "data_offset": 0, 00:32:41.856 "data_size": 65536 00:32:41.856 }, 00:32:41.856 { 00:32:41.856 "name": "BaseBdev2", 00:32:41.856 "uuid": "9b433e44-dba1-4a0f-ad18-96f0b44f13c2", 00:32:41.856 "is_configured": true, 00:32:41.856 "data_offset": 0, 00:32:41.856 "data_size": 65536 00:32:41.856 }, 00:32:41.856 { 00:32:41.856 "name": "BaseBdev3", 00:32:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.856 "is_configured": false, 00:32:41.856 "data_offset": 0, 00:32:41.856 "data_size": 0 00:32:41.856 } 00:32:41.856 ] 00:32:41.856 }' 00:32:41.856 19:25:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:41.856 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:32:42.800 19:25:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:42.800 [2024-04-18 19:25:58.610209] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:42.800 [2024-04-18 19:25:58.610263] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:32:42.800 [2024-04-18 19:25:58.610273] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:42.800 [2024-04-18 19:25:58.610407] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:32:42.800 [2024-04-18 19:25:58.610747] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:32:42.800 [2024-04-18 19:25:58.610773] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:32:42.800 [2024-04-18 19:25:58.611008] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.800 BaseBdev3 00:32:42.800 19:25:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:32:42.800 19:25:58 -- common/autotest_common.sh@885 -- 
# local bdev_name=BaseBdev3 00:32:42.800 19:25:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:42.800 19:25:58 -- common/autotest_common.sh@887 -- # local i 00:32:42.800 19:25:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:42.800 19:25:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:42.800 19:25:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:43.058 19:25:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:43.317 [ 00:32:43.317 { 00:32:43.317 "name": "BaseBdev3", 00:32:43.317 "aliases": [ 00:32:43.317 "a1e67561-5743-4e21-bb04-a561e2f537c8" 00:32:43.317 ], 00:32:43.317 "product_name": "Malloc disk", 00:32:43.317 "block_size": 512, 00:32:43.317 "num_blocks": 65536, 00:32:43.317 "uuid": "a1e67561-5743-4e21-bb04-a561e2f537c8", 00:32:43.317 "assigned_rate_limits": { 00:32:43.317 "rw_ios_per_sec": 0, 00:32:43.317 "rw_mbytes_per_sec": 0, 00:32:43.317 "r_mbytes_per_sec": 0, 00:32:43.317 "w_mbytes_per_sec": 0 00:32:43.317 }, 00:32:43.317 "claimed": true, 00:32:43.317 "claim_type": "exclusive_write", 00:32:43.317 "zoned": false, 00:32:43.317 "supported_io_types": { 00:32:43.317 "read": true, 00:32:43.317 "write": true, 00:32:43.317 "unmap": true, 00:32:43.317 "write_zeroes": true, 00:32:43.317 "flush": true, 00:32:43.317 "reset": true, 00:32:43.317 "compare": false, 00:32:43.317 "compare_and_write": false, 00:32:43.317 "abort": true, 00:32:43.317 "nvme_admin": false, 00:32:43.317 "nvme_io": false 00:32:43.317 }, 00:32:43.317 "memory_domains": [ 00:32:43.317 { 00:32:43.317 "dma_device_id": "system", 00:32:43.317 "dma_device_type": 1 00:32:43.317 }, 00:32:43.317 { 00:32:43.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:43.317 "dma_device_type": 2 00:32:43.317 } 00:32:43.317 ], 00:32:43.317 "driver_specific": {} 00:32:43.317 } 00:32:43.317 ] 00:32:43.317 19:25:59 -- common/autotest_common.sh@893 -- # return 0 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.317 19:25:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.575 19:25:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:43.575 "name": "Existed_Raid", 00:32:43.575 "uuid": "d652c247-083b-4525-a63b-ab1ad86b8aa2", 00:32:43.575 "strip_size_kb": 64, 00:32:43.575 "state": "online", 00:32:43.575 "raid_level": "concat", 00:32:43.575 
"superblock": false, 00:32:43.575 "num_base_bdevs": 3, 00:32:43.575 "num_base_bdevs_discovered": 3, 00:32:43.575 "num_base_bdevs_operational": 3, 00:32:43.575 "base_bdevs_list": [ 00:32:43.575 { 00:32:43.575 "name": "BaseBdev1", 00:32:43.575 "uuid": "035e5fba-bb64-4293-9442-c8fd2bcd4543", 00:32:43.575 "is_configured": true, 00:32:43.575 "data_offset": 0, 00:32:43.575 "data_size": 65536 00:32:43.575 }, 00:32:43.575 { 00:32:43.575 "name": "BaseBdev2", 00:32:43.575 "uuid": "9b433e44-dba1-4a0f-ad18-96f0b44f13c2", 00:32:43.575 "is_configured": true, 00:32:43.575 "data_offset": 0, 00:32:43.575 "data_size": 65536 00:32:43.575 }, 00:32:43.575 { 00:32:43.575 "name": "BaseBdev3", 00:32:43.575 "uuid": "a1e67561-5743-4e21-bb04-a561e2f537c8", 00:32:43.575 "is_configured": true, 00:32:43.575 "data_offset": 0, 00:32:43.575 "data_size": 65536 00:32:43.575 } 00:32:43.575 ] 00:32:43.575 }' 00:32:43.575 19:25:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:43.575 19:25:59 -- common/autotest_common.sh@10 -- # set +x 00:32:44.571 19:26:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:44.571 [2024-04-18 19:26:00.370850] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:44.571 [2024-04-18 19:26:00.370895] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:44.571 [2024-04-18 19:26:00.370976] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:44.829 "name": "Existed_Raid", 00:32:44.829 "uuid": "d652c247-083b-4525-a63b-ab1ad86b8aa2", 00:32:44.829 "strip_size_kb": 64, 00:32:44.829 "state": "offline", 00:32:44.829 "raid_level": "concat", 00:32:44.829 "superblock": false, 00:32:44.829 "num_base_bdevs": 3, 00:32:44.829 "num_base_bdevs_discovered": 2, 00:32:44.829 "num_base_bdevs_operational": 2, 00:32:44.829 "base_bdevs_list": [ 00:32:44.829 { 00:32:44.829 "name": null, 00:32:44.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.829 "is_configured": false, 00:32:44.829 "data_offset": 0, 
00:32:44.829 "data_size": 65536 00:32:44.829 }, 00:32:44.829 { 00:32:44.829 "name": "BaseBdev2", 00:32:44.829 "uuid": "9b433e44-dba1-4a0f-ad18-96f0b44f13c2", 00:32:44.829 "is_configured": true, 00:32:44.829 "data_offset": 0, 00:32:44.829 "data_size": 65536 00:32:44.829 }, 00:32:44.829 { 00:32:44.829 "name": "BaseBdev3", 00:32:44.829 "uuid": "a1e67561-5743-4e21-bb04-a561e2f537c8", 00:32:44.829 "is_configured": true, 00:32:44.829 "data_offset": 0, 00:32:44.829 "data_size": 65536 00:32:44.829 } 00:32:44.829 ] 00:32:44.829 }' 00:32:44.829 19:26:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:44.829 19:26:00 -- common/autotest_common.sh@10 -- # set +x 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:45.762 19:26:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:46.021 [2024-04-18 19:26:01.854120] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:46.280 19:26:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:32:46.280 19:26:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:46.280 19:26:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.280 19:26:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:32:46.538 19:26:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:32:46.538 19:26:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:46.538 19:26:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:46.798 [2024-04-18 19:26:02.603341] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:46.798 [2024-04-18 19:26:02.603417] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:32:47.074 19:26:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:32:47.074 19:26:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:32:47.074 19:26:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:32:47.074 19:26:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.332 19:26:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:32:47.332 19:26:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:32:47.332 19:26:03 -- bdev/bdev_raid.sh@287 -- # killprocess 125293 00:32:47.332 19:26:03 -- common/autotest_common.sh@936 -- # '[' -z 125293 ']' 00:32:47.332 19:26:03 -- common/autotest_common.sh@940 -- # kill -0 125293 00:32:47.332 19:26:03 -- common/autotest_common.sh@941 -- # uname 00:32:47.332 19:26:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:47.332 19:26:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125293 00:32:47.332 killing process with pid 125293 00:32:47.332 19:26:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:47.332 19:26:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:32:47.332 19:26:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125293' 00:32:47.332 19:26:03 -- common/autotest_common.sh@955 -- # kill 125293 00:32:47.332 19:26:03 -- common/autotest_common.sh@960 -- # wait 125293 00:32:47.332 [2024-04-18 19:26:03.126739] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:47.332 [2024-04-18 19:26:03.127128] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:48.705 ************************************ 00:32:48.705 END TEST raid_state_function_test 00:32:48.705 ************************************ 00:32:48.705 19:26:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:32:48.705 00:32:48.705 real 0m13.974s 00:32:48.705 user 0m24.263s 00:32:48.705 sys 0m1.750s 00:32:48.705 19:26:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:48.705 19:26:04 -- common/autotest_common.sh@10 -- # set +x 00:32:48.705 19:26:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:32:48.705 19:26:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:32:48.705 19:26:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:48.705 19:26:04 -- common/autotest_common.sh@10 -- # set +x 00:32:48.963 ************************************ 00:32:48.963 START TEST raid_state_function_test_sb 00:32:48.963 ************************************ 00:32:48.963 19:26:04 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 true 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=125732 00:32:48.963 Process raid pid: 
125732 00:32:48.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125732' 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125732 /var/tmp/spdk-raid.sock 00:32:48.963 19:26:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:48.963 19:26:04 -- common/autotest_common.sh@817 -- # '[' -z 125732 ']' 00:32:48.963 19:26:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:48.963 19:26:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:48.963 19:26:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:48.963 19:26:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:48.963 19:26:04 -- common/autotest_common.sh@10 -- # set +x 00:32:48.963 [2024-04-18 19:26:04.740144] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:32:48.963 [2024-04-18 19:26:04.740479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:49.221 [2024-04-18 19:26:04.928362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.478 [2024-04-18 19:26:05.206380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.735 [2024-04-18 19:26:05.485181] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.735 19:26:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:49.735 19:26:05 -- common/autotest_common.sh@850 -- # return 0 00:32:49.735 19:26:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:49.992 [2024-04-18 19:26:05.864215] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:49.992 [2024-04-18 19:26:05.864479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:49.992 [2024-04-18 19:26:05.864624] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:49.992 [2024-04-18 19:26:05.864681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:49.992 [2024-04-18 19:26:05.864805] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:49.992 [2024-04-18 19:26:05.864893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:49.992 19:26:05 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.992 19:26:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:50.249 19:26:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:50.249 "name": "Existed_Raid", 00:32:50.249 "uuid": "46ace7a3-80ac-45b2-9787-d38b2b33f8bd", 00:32:50.249 "strip_size_kb": 64, 00:32:50.249 "state": "configuring", 00:32:50.249 "raid_level": "concat", 00:32:50.249 "superblock": true, 00:32:50.249 "num_base_bdevs": 3, 00:32:50.249 "num_base_bdevs_discovered": 0, 00:32:50.249 "num_base_bdevs_operational": 3, 00:32:50.249 "base_bdevs_list": [ 00:32:50.249 { 00:32:50.249 "name": "BaseBdev1", 00:32:50.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.249 "is_configured": false, 00:32:50.249 "data_offset": 0, 00:32:50.249 "data_size": 0 00:32:50.249 }, 00:32:50.249 { 00:32:50.249 "name": "BaseBdev2", 00:32:50.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.249 "is_configured": false, 00:32:50.249 "data_offset": 0, 00:32:50.249 "data_size": 0 00:32:50.249 }, 00:32:50.249 { 00:32:50.249 "name": "BaseBdev3", 00:32:50.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.249 "is_configured": false, 00:32:50.249 "data_offset": 0, 00:32:50.249 "data_size": 0 00:32:50.249 } 00:32:50.249 ] 00:32:50.249 }' 00:32:50.249 19:26:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:50.249 19:26:06 -- common/autotest_common.sh@10 -- # set +x 00:32:51.182 19:26:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:51.440 [2024-04-18 19:26:07.164296] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:51.440 [2024-04-18 19:26:07.164510] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:51.440 19:26:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:51.698 [2024-04-18 19:26:07.408407] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:51.698 [2024-04-18 19:26:07.408608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:51.698 [2024-04-18 19:26:07.408712] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:51.698 [2024-04-18 19:26:07.408772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:51.698 [2024-04-18 19:26:07.408801] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:51.698 [2024-04-18 19:26:07.408911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:51.698 19:26:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:51.956 [2024-04-18 19:26:07.690136] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:51.956 BaseBdev1 00:32:51.956 19:26:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:32:51.956 19:26:07 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:32:51.956 19:26:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:51.956 19:26:07 -- common/autotest_common.sh@887 -- # local i 00:32:51.956 19:26:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:51.956 19:26:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:51.956 19:26:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:52.214 19:26:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:52.471 [ 00:32:52.471 { 00:32:52.471 "name": "BaseBdev1", 00:32:52.471 "aliases": [ 00:32:52.471 "9b7709ad-7e44-42d7-ab95-b6504be03507" 00:32:52.471 ], 00:32:52.471 "product_name": "Malloc disk", 00:32:52.471 "block_size": 512, 00:32:52.471 "num_blocks": 65536, 00:32:52.471 "uuid": "9b7709ad-7e44-42d7-ab95-b6504be03507", 00:32:52.471 "assigned_rate_limits": { 00:32:52.471 "rw_ios_per_sec": 0, 00:32:52.471 "rw_mbytes_per_sec": 0, 00:32:52.471 "r_mbytes_per_sec": 0, 00:32:52.471 "w_mbytes_per_sec": 0 00:32:52.471 }, 00:32:52.471 "claimed": true, 00:32:52.471 "claim_type": "exclusive_write", 00:32:52.471 "zoned": false, 00:32:52.471 "supported_io_types": { 00:32:52.471 "read": true, 00:32:52.471 "write": true, 00:32:52.471 "unmap": true, 00:32:52.471 "write_zeroes": true, 00:32:52.471 "flush": true, 00:32:52.471 "reset": true, 00:32:52.471 "compare": false, 00:32:52.471 "compare_and_write": false, 00:32:52.471 "abort": true, 00:32:52.471 "nvme_admin": false, 00:32:52.471 "nvme_io": false 00:32:52.471 }, 00:32:52.471 "memory_domains": [ 00:32:52.471 { 00:32:52.471 "dma_device_id": "system", 00:32:52.471 "dma_device_type": 1 00:32:52.471 }, 00:32:52.471 { 00:32:52.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:52.471 "dma_device_type": 2 00:32:52.471 } 00:32:52.471 ], 00:32:52.471 "driver_specific": {} 00:32:52.471 } 00:32:52.471 ] 00:32:52.472 19:26:08 -- common/autotest_common.sh@893 -- # return 0 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.472 19:26:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:52.729 19:26:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:52.729 "name": "Existed_Raid", 00:32:52.729 "uuid": "8306ca82-1dce-40f8-b785-6a470bb9c4d8", 00:32:52.729 "strip_size_kb": 64, 00:32:52.729 "state": "configuring", 00:32:52.729 "raid_level": "concat", 00:32:52.729 "superblock": true, 00:32:52.729 "num_base_bdevs": 3, 00:32:52.729 "num_base_bdevs_discovered": 1, 00:32:52.729 "num_base_bdevs_operational": 
3, 00:32:52.729 "base_bdevs_list": [ 00:32:52.729 { 00:32:52.729 "name": "BaseBdev1", 00:32:52.729 "uuid": "9b7709ad-7e44-42d7-ab95-b6504be03507", 00:32:52.729 "is_configured": true, 00:32:52.729 "data_offset": 2048, 00:32:52.729 "data_size": 63488 00:32:52.729 }, 00:32:52.729 { 00:32:52.729 "name": "BaseBdev2", 00:32:52.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.729 "is_configured": false, 00:32:52.729 "data_offset": 0, 00:32:52.729 "data_size": 0 00:32:52.729 }, 00:32:52.729 { 00:32:52.729 "name": "BaseBdev3", 00:32:52.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.729 "is_configured": false, 00:32:52.729 "data_offset": 0, 00:32:52.729 "data_size": 0 00:32:52.729 } 00:32:52.729 ] 00:32:52.729 }' 00:32:52.729 19:26:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:52.729 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:32:53.293 19:26:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:53.550 [2024-04-18 19:26:09.434750] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:53.550 [2024-04-18 19:26:09.435007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:53.550 19:26:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:32:53.551 19:26:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:54.117 19:26:09 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:54.117 BaseBdev1 00:32:54.117 19:26:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:32:54.117 19:26:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:32:54.117 19:26:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:54.117 19:26:10 -- common/autotest_common.sh@887 -- # local i 00:32:54.117 19:26:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:54.117 19:26:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:54.117 19:26:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:54.375 19:26:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:54.632 [ 00:32:54.632 { 00:32:54.632 "name": "BaseBdev1", 00:32:54.632 "aliases": [ 00:32:54.632 "8ed26132-6b24-4996-8ec8-4bbe3b60621c" 00:32:54.632 ], 00:32:54.632 "product_name": "Malloc disk", 00:32:54.632 "block_size": 512, 00:32:54.633 "num_blocks": 65536, 00:32:54.633 "uuid": "8ed26132-6b24-4996-8ec8-4bbe3b60621c", 00:32:54.633 "assigned_rate_limits": { 00:32:54.633 "rw_ios_per_sec": 0, 00:32:54.633 "rw_mbytes_per_sec": 0, 00:32:54.633 "r_mbytes_per_sec": 0, 00:32:54.633 "w_mbytes_per_sec": 0 00:32:54.633 }, 00:32:54.633 "claimed": false, 00:32:54.633 "zoned": false, 00:32:54.633 "supported_io_types": { 00:32:54.633 "read": true, 00:32:54.633 "write": true, 00:32:54.633 "unmap": true, 00:32:54.633 "write_zeroes": true, 00:32:54.633 "flush": true, 00:32:54.633 "reset": true, 00:32:54.633 "compare": false, 00:32:54.633 "compare_and_write": false, 00:32:54.633 "abort": true, 00:32:54.633 "nvme_admin": false, 00:32:54.633 "nvme_io": false 00:32:54.633 }, 00:32:54.633 "memory_domains": [ 00:32:54.633 { 00:32:54.633 "dma_device_id": "system", 
00:32:54.633 "dma_device_type": 1 00:32:54.633 }, 00:32:54.633 { 00:32:54.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.633 "dma_device_type": 2 00:32:54.633 } 00:32:54.633 ], 00:32:54.633 "driver_specific": {} 00:32:54.633 } 00:32:54.633 ] 00:32:54.633 19:26:10 -- common/autotest_common.sh@893 -- # return 0 00:32:54.633 19:26:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:55.199 [2024-04-18 19:26:10.837887] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:55.199 [2024-04-18 19:26:10.840100] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:55.199 [2024-04-18 19:26:10.840170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:55.199 [2024-04-18 19:26:10.840181] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:55.199 [2024-04-18 19:26:10.840206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.199 19:26:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:55.456 19:26:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:55.456 "name": "Existed_Raid", 00:32:55.456 "uuid": "87252c61-9ce1-4200-9359-4b29047beb13", 00:32:55.456 "strip_size_kb": 64, 00:32:55.456 "state": "configuring", 00:32:55.456 "raid_level": "concat", 00:32:55.456 "superblock": true, 00:32:55.456 "num_base_bdevs": 3, 00:32:55.456 "num_base_bdevs_discovered": 1, 00:32:55.456 "num_base_bdevs_operational": 3, 00:32:55.456 "base_bdevs_list": [ 00:32:55.456 { 00:32:55.456 "name": "BaseBdev1", 00:32:55.456 "uuid": "8ed26132-6b24-4996-8ec8-4bbe3b60621c", 00:32:55.456 "is_configured": true, 00:32:55.456 "data_offset": 2048, 00:32:55.457 "data_size": 63488 00:32:55.457 }, 00:32:55.457 { 00:32:55.457 "name": "BaseBdev2", 00:32:55.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.457 "is_configured": false, 00:32:55.457 "data_offset": 0, 00:32:55.457 "data_size": 0 00:32:55.457 }, 00:32:55.457 { 00:32:55.457 "name": "BaseBdev3", 00:32:55.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.457 "is_configured": false, 00:32:55.457 "data_offset": 0, 00:32:55.457 "data_size": 0 00:32:55.457 } 00:32:55.457 ] 00:32:55.457 }' 
00:32:55.457 19:26:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:55.457 19:26:11 -- common/autotest_common.sh@10 -- # set +x 00:32:56.021 19:26:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:56.587 [2024-04-18 19:26:12.266587] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:56.587 BaseBdev2 00:32:56.587 19:26:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:32:56.587 19:26:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:32:56.587 19:26:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:56.587 19:26:12 -- common/autotest_common.sh@887 -- # local i 00:32:56.587 19:26:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:56.587 19:26:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:56.587 19:26:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:56.845 19:26:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:57.103 [ 00:32:57.103 { 00:32:57.103 "name": "BaseBdev2", 00:32:57.103 "aliases": [ 00:32:57.103 "09883d73-d68a-4273-9c6f-653192150d1b" 00:32:57.103 ], 00:32:57.103 "product_name": "Malloc disk", 00:32:57.103 "block_size": 512, 00:32:57.103 "num_blocks": 65536, 00:32:57.103 "uuid": "09883d73-d68a-4273-9c6f-653192150d1b", 00:32:57.103 "assigned_rate_limits": { 00:32:57.103 "rw_ios_per_sec": 0, 00:32:57.103 "rw_mbytes_per_sec": 0, 00:32:57.103 "r_mbytes_per_sec": 0, 00:32:57.103 "w_mbytes_per_sec": 0 00:32:57.103 }, 00:32:57.103 "claimed": true, 00:32:57.103 "claim_type": "exclusive_write", 00:32:57.103 "zoned": false, 00:32:57.103 "supported_io_types": { 00:32:57.103 "read": true, 00:32:57.103 "write": true, 00:32:57.103 "unmap": true, 00:32:57.103 "write_zeroes": true, 00:32:57.103 "flush": true, 00:32:57.103 "reset": true, 00:32:57.103 "compare": false, 00:32:57.103 "compare_and_write": false, 00:32:57.103 "abort": true, 00:32:57.103 "nvme_admin": false, 00:32:57.103 "nvme_io": false 00:32:57.103 }, 00:32:57.103 "memory_domains": [ 00:32:57.103 { 00:32:57.103 "dma_device_id": "system", 00:32:57.103 "dma_device_type": 1 00:32:57.103 }, 00:32:57.103 { 00:32:57.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.103 "dma_device_type": 2 00:32:57.103 } 00:32:57.103 ], 00:32:57.103 "driver_specific": {} 00:32:57.103 } 00:32:57.103 ] 00:32:57.103 19:26:12 -- common/autotest_common.sh@893 -- # return 0 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:57.103 
19:26:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.103 19:26:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:57.361 19:26:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:57.361 "name": "Existed_Raid", 00:32:57.361 "uuid": "87252c61-9ce1-4200-9359-4b29047beb13", 00:32:57.361 "strip_size_kb": 64, 00:32:57.361 "state": "configuring", 00:32:57.361 "raid_level": "concat", 00:32:57.361 "superblock": true, 00:32:57.361 "num_base_bdevs": 3, 00:32:57.361 "num_base_bdevs_discovered": 2, 00:32:57.361 "num_base_bdevs_operational": 3, 00:32:57.361 "base_bdevs_list": [ 00:32:57.361 { 00:32:57.361 "name": "BaseBdev1", 00:32:57.361 "uuid": "8ed26132-6b24-4996-8ec8-4bbe3b60621c", 00:32:57.361 "is_configured": true, 00:32:57.361 "data_offset": 2048, 00:32:57.361 "data_size": 63488 00:32:57.361 }, 00:32:57.361 { 00:32:57.361 "name": "BaseBdev2", 00:32:57.361 "uuid": "09883d73-d68a-4273-9c6f-653192150d1b", 00:32:57.361 "is_configured": true, 00:32:57.361 "data_offset": 2048, 00:32:57.361 "data_size": 63488 00:32:57.361 }, 00:32:57.361 { 00:32:57.361 "name": "BaseBdev3", 00:32:57.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.361 "is_configured": false, 00:32:57.361 "data_offset": 0, 00:32:57.361 "data_size": 0 00:32:57.361 } 00:32:57.361 ] 00:32:57.361 }' 00:32:57.361 19:26:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:57.361 19:26:13 -- common/autotest_common.sh@10 -- # set +x 00:32:57.926 19:26:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:58.183 [2024-04-18 19:26:14.067007] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:58.183 [2024-04-18 19:26:14.067239] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:32:58.183 [2024-04-18 19:26:14.067255] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:58.183 [2024-04-18 19:26:14.067416] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:32:58.183 BaseBdev3 00:32:58.183 [2024-04-18 19:26:14.067754] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:32:58.183 [2024-04-18 19:26:14.067774] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:32:58.183 [2024-04-18 19:26:14.067938] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.183 19:26:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:32:58.183 19:26:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:32:58.183 19:26:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:32:58.183 19:26:14 -- common/autotest_common.sh@887 -- # local i 00:32:58.183 19:26:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:32:58.183 19:26:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:32:58.183 19:26:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:58.441 19:26:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:58.698 [ 00:32:58.698 { 00:32:58.698 "name": "BaseBdev3", 00:32:58.698 
"aliases": [ 00:32:58.698 "2fc404a6-e531-49d4-b23c-c7081dcdf71a" 00:32:58.698 ], 00:32:58.698 "product_name": "Malloc disk", 00:32:58.698 "block_size": 512, 00:32:58.698 "num_blocks": 65536, 00:32:58.698 "uuid": "2fc404a6-e531-49d4-b23c-c7081dcdf71a", 00:32:58.698 "assigned_rate_limits": { 00:32:58.698 "rw_ios_per_sec": 0, 00:32:58.698 "rw_mbytes_per_sec": 0, 00:32:58.698 "r_mbytes_per_sec": 0, 00:32:58.698 "w_mbytes_per_sec": 0 00:32:58.698 }, 00:32:58.698 "claimed": true, 00:32:58.698 "claim_type": "exclusive_write", 00:32:58.698 "zoned": false, 00:32:58.698 "supported_io_types": { 00:32:58.698 "read": true, 00:32:58.698 "write": true, 00:32:58.698 "unmap": true, 00:32:58.698 "write_zeroes": true, 00:32:58.698 "flush": true, 00:32:58.698 "reset": true, 00:32:58.699 "compare": false, 00:32:58.699 "compare_and_write": false, 00:32:58.699 "abort": true, 00:32:58.699 "nvme_admin": false, 00:32:58.699 "nvme_io": false 00:32:58.699 }, 00:32:58.699 "memory_domains": [ 00:32:58.699 { 00:32:58.699 "dma_device_id": "system", 00:32:58.699 "dma_device_type": 1 00:32:58.699 }, 00:32:58.699 { 00:32:58.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:58.699 "dma_device_type": 2 00:32:58.699 } 00:32:58.699 ], 00:32:58.699 "driver_specific": {} 00:32:58.699 } 00:32:58.699 ] 00:32:58.699 19:26:14 -- common/autotest_common.sh@893 -- # return 0 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.699 19:26:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.956 19:26:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:32:58.956 "name": "Existed_Raid", 00:32:58.956 "uuid": "87252c61-9ce1-4200-9359-4b29047beb13", 00:32:58.956 "strip_size_kb": 64, 00:32:58.956 "state": "online", 00:32:58.956 "raid_level": "concat", 00:32:58.956 "superblock": true, 00:32:58.956 "num_base_bdevs": 3, 00:32:58.956 "num_base_bdevs_discovered": 3, 00:32:58.956 "num_base_bdevs_operational": 3, 00:32:58.956 "base_bdevs_list": [ 00:32:58.956 { 00:32:58.956 "name": "BaseBdev1", 00:32:58.956 "uuid": "8ed26132-6b24-4996-8ec8-4bbe3b60621c", 00:32:58.956 "is_configured": true, 00:32:58.956 "data_offset": 2048, 00:32:58.956 "data_size": 63488 00:32:58.956 }, 00:32:58.956 { 00:32:58.956 "name": "BaseBdev2", 00:32:58.956 "uuid": "09883d73-d68a-4273-9c6f-653192150d1b", 00:32:58.956 "is_configured": true, 00:32:58.956 "data_offset": 2048, 00:32:58.956 "data_size": 63488 00:32:58.956 }, 00:32:58.956 { 00:32:58.956 "name": "BaseBdev3", 00:32:58.956 "uuid": 
"2fc404a6-e531-49d4-b23c-c7081dcdf71a", 00:32:58.956 "is_configured": true, 00:32:58.956 "data_offset": 2048, 00:32:58.956 "data_size": 63488 00:32:58.956 } 00:32:58.956 ] 00:32:58.956 }' 00:32:58.956 19:26:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:32:58.956 19:26:14 -- common/autotest_common.sh@10 -- # set +x 00:32:59.567 19:26:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:59.844 [2024-04-18 19:26:15.659492] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:59.844 [2024-04-18 19:26:15.659532] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:59.844 [2024-04-18 19:26:15.659584] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.101 19:26:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:00.359 19:26:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:00.359 "name": "Existed_Raid", 00:33:00.359 "uuid": "87252c61-9ce1-4200-9359-4b29047beb13", 00:33:00.359 "strip_size_kb": 64, 00:33:00.359 "state": "offline", 00:33:00.359 "raid_level": "concat", 00:33:00.359 "superblock": true, 00:33:00.359 "num_base_bdevs": 3, 00:33:00.359 "num_base_bdevs_discovered": 2, 00:33:00.359 "num_base_bdevs_operational": 2, 00:33:00.359 "base_bdevs_list": [ 00:33:00.359 { 00:33:00.359 "name": null, 00:33:00.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.359 "is_configured": false, 00:33:00.359 "data_offset": 2048, 00:33:00.359 "data_size": 63488 00:33:00.359 }, 00:33:00.359 { 00:33:00.359 "name": "BaseBdev2", 00:33:00.359 "uuid": "09883d73-d68a-4273-9c6f-653192150d1b", 00:33:00.359 "is_configured": true, 00:33:00.359 "data_offset": 2048, 00:33:00.359 "data_size": 63488 00:33:00.359 }, 00:33:00.359 { 00:33:00.359 "name": "BaseBdev3", 00:33:00.359 "uuid": "2fc404a6-e531-49d4-b23c-c7081dcdf71a", 00:33:00.359 "is_configured": true, 00:33:00.359 "data_offset": 2048, 00:33:00.359 "data_size": 63488 00:33:00.359 } 00:33:00.359 ] 00:33:00.359 }' 00:33:00.359 19:26:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:00.359 19:26:16 -- common/autotest_common.sh@10 -- # set +x 00:33:00.924 19:26:16 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:33:00.924 19:26:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:00.924 19:26:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.924 19:26:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:33:01.181 19:26:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:33:01.181 19:26:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:01.181 19:26:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:01.439 [2024-04-18 19:26:17.269261] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:01.698 19:26:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:33:01.698 19:26:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:01.698 19:26:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:33:01.698 19:26:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.957 19:26:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:33:01.957 19:26:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:01.957 19:26:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:02.215 [2024-04-18 19:26:17.981249] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:02.215 [2024-04-18 19:26:17.981319] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:33:02.215 19:26:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:33:02.215 19:26:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:02.215 19:26:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.215 19:26:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:33:02.781 19:26:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:33:02.781 19:26:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:33:02.781 19:26:18 -- bdev/bdev_raid.sh@287 -- # killprocess 125732 00:33:02.781 19:26:18 -- common/autotest_common.sh@936 -- # '[' -z 125732 ']' 00:33:02.781 19:26:18 -- common/autotest_common.sh@940 -- # kill -0 125732 00:33:02.781 19:26:18 -- common/autotest_common.sh@941 -- # uname 00:33:02.781 19:26:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:02.781 19:26:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125732 00:33:02.781 killing process with pid 125732 00:33:02.781 19:26:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:02.781 19:26:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:02.781 19:26:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125732' 00:33:02.781 19:26:18 -- common/autotest_common.sh@955 -- # kill 125732 00:33:02.781 19:26:18 -- common/autotest_common.sh@960 -- # wait 125732 00:33:02.781 [2024-04-18 19:26:18.487942] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:02.781 [2024-04-18 19:26:18.488058] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:04.155 ************************************ 00:33:04.155 END TEST raid_state_function_test_sb 00:33:04.155 ************************************ 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:33:04.155 
00:33:04.155 real 0m15.227s 00:33:04.155 user 0m26.556s 00:33:04.155 sys 0m1.913s 00:33:04.155 19:26:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:04.155 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:33:04.155 19:26:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:33:04.155 19:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:04.155 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:33:04.155 ************************************ 00:33:04.155 START TEST raid_superblock_test 00:33:04.155 ************************************ 00:33:04.155 19:26:19 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 3 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=126165 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126165 /var/tmp/spdk-raid.sock 00:33:04.155 19:26:19 -- common/autotest_common.sh@817 -- # '[' -z 126165 ']' 00:33:04.155 19:26:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:04.155 19:26:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:04.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:04.155 19:26:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:04.155 19:26:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:04.155 19:26:19 -- common/autotest_common.sh@10 -- # set +x 00:33:04.155 19:26:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:04.155 [2024-04-18 19:26:20.052059] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:33:04.155 [2024-04-18 19:26:20.052448] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126165 ] 00:33:04.414 [2024-04-18 19:26:20.230734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.672 [2024-04-18 19:26:20.476338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.930 [2024-04-18 19:26:20.695181] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:05.188 19:26:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:05.188 19:26:21 -- common/autotest_common.sh@850 -- # return 0 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:05.188 19:26:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:33:05.446 malloc1 00:33:05.446 19:26:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:05.704 [2024-04-18 19:26:21.624054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:05.704 [2024-04-18 19:26:21.624150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.704 [2024-04-18 19:26:21.624183] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:05.704 [2024-04-18 19:26:21.624231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.704 [2024-04-18 19:26:21.626798] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.704 [2024-04-18 19:26:21.626850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:05.704 pt1 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:05.962 19:26:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:33:06.220 malloc2 00:33:06.220 19:26:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:33:06.478 [2024-04-18 19:26:22.182701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:06.478 [2024-04-18 19:26:22.182803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:06.478 [2024-04-18 19:26:22.182849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:06.478 [2024-04-18 19:26:22.182909] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:06.478 [2024-04-18 19:26:22.185444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:06.478 [2024-04-18 19:26:22.185502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:06.478 pt2 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:06.478 19:26:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:33:06.736 malloc3 00:33:06.736 19:26:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:06.994 [2024-04-18 19:26:22.700502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:06.994 [2024-04-18 19:26:22.700590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:06.994 [2024-04-18 19:26:22.700629] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:06.994 [2024-04-18 19:26:22.700671] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:06.994 [2024-04-18 19:26:22.703171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:06.994 [2024-04-18 19:26:22.703236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:06.994 pt3 00:33:06.994 19:26:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:33:06.994 19:26:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:06.994 19:26:22 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:33:07.252 [2024-04-18 19:26:23.008603] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:07.252 [2024-04-18 19:26:23.010769] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:07.252 [2024-04-18 19:26:23.010853] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:07.252 [2024-04-18 19:26:23.011060] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:33:07.252 [2024-04-18 19:26:23.011083] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:07.252 [2024-04-18 19:26:23.011243] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:33:07.252 [2024-04-18 19:26:23.011626] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:33:07.252 [2024-04-18 19:26:23.011645] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:33:07.252 [2024-04-18 19:26:23.011807] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.252 19:26:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.510 19:26:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:07.510 "name": "raid_bdev1", 00:33:07.510 "uuid": "7d4d8a38-7905-4609-a0d5-4a2e45f808b6", 00:33:07.510 "strip_size_kb": 64, 00:33:07.510 "state": "online", 00:33:07.510 "raid_level": "concat", 00:33:07.510 "superblock": true, 00:33:07.510 "num_base_bdevs": 3, 00:33:07.510 "num_base_bdevs_discovered": 3, 00:33:07.510 "num_base_bdevs_operational": 3, 00:33:07.510 "base_bdevs_list": [ 00:33:07.510 { 00:33:07.510 "name": "pt1", 00:33:07.510 "uuid": "689f1e95-35fb-5e39-bd87-c6c46218bc0a", 00:33:07.510 "is_configured": true, 00:33:07.510 "data_offset": 2048, 00:33:07.510 "data_size": 63488 00:33:07.510 }, 00:33:07.510 { 00:33:07.510 "name": "pt2", 00:33:07.510 "uuid": "fc6704e9-e63d-5162-9cea-4b5374187ad4", 00:33:07.510 "is_configured": true, 00:33:07.510 "data_offset": 2048, 00:33:07.510 "data_size": 63488 00:33:07.510 }, 00:33:07.510 { 00:33:07.510 "name": "pt3", 00:33:07.510 "uuid": "d3eac037-3405-5e7a-9543-c28131bdb77f", 00:33:07.510 "is_configured": true, 00:33:07.510 "data_offset": 2048, 00:33:07.510 "data_size": 63488 00:33:07.510 } 00:33:07.510 ] 00:33:07.510 }' 00:33:07.510 19:26:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:07.510 19:26:23 -- common/autotest_common.sh@10 -- # set +x 00:33:08.443 19:26:24 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:08.443 19:26:24 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:33:08.702 [2024-04-18 19:26:24.377143] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:08.702 19:26:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7d4d8a38-7905-4609-a0d5-4a2e45f808b6 00:33:08.702 19:26:24 -- bdev/bdev_raid.sh@380 -- # '[' -z 7d4d8a38-7905-4609-a0d5-4a2e45f808b6 ']' 00:33:08.702 19:26:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:08.960 [2024-04-18 19:26:24.660936] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:08.960 [2024-04-18 19:26:24.660978] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:08.960 [2024-04-18 19:26:24.661064] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:08.960 [2024-04-18 19:26:24.661128] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:08.960 [2024-04-18 19:26:24.661139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:33:08.960 19:26:24 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:33:08.960 19:26:24 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.218 19:26:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:33:09.218 19:26:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:33:09.218 19:26:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:33:09.218 19:26:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:09.475 19:26:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:33:09.475 19:26:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:09.734 19:26:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:33:09.734 19:26:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:09.734 19:26:25 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:09.734 19:26:25 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:09.993 19:26:25 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:33:09.993 19:26:25 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:09.993 19:26:25 -- common/autotest_common.sh@638 -- # local es=0 00:33:09.993 19:26:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:09.993 19:26:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:09.993 19:26:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:09.993 19:26:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:09.993 19:26:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:09.993 19:26:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:09.993 19:26:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:09.993 19:26:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:09.993 19:26:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:09.993 19:26:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:10.251 [2024-04-18 19:26:26.161306] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:10.251 [2024-04-18 19:26:26.163695] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:10.251 [2024-04-18 19:26:26.163760] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:10.251 [2024-04-18 19:26:26.163819] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:33:10.251 [2024-04-18 19:26:26.163906] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:33:10.251 [2024-04-18 19:26:26.163945] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:33:10.251 [2024-04-18 19:26:26.163998] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:10.251 [2024-04-18 19:26:26.164014] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:33:10.251 request: 00:33:10.251 { 00:33:10.251 "name": "raid_bdev1", 00:33:10.251 "raid_level": "concat", 00:33:10.251 "base_bdevs": [ 00:33:10.251 "malloc1", 00:33:10.251 "malloc2", 00:33:10.251 "malloc3" 00:33:10.251 ], 00:33:10.251 "superblock": false, 00:33:10.251 "strip_size_kb": 64, 00:33:10.251 "method": "bdev_raid_create", 00:33:10.251 "req_id": 1 00:33:10.251 } 00:33:10.251 Got JSON-RPC error response 00:33:10.251 response: 00:33:10.251 { 00:33:10.251 "code": -17, 00:33:10.251 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:10.251 } 00:33:10.509 19:26:26 -- common/autotest_common.sh@641 -- # es=1 00:33:10.509 19:26:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:10.509 19:26:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:10.509 19:26:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:10.509 19:26:26 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.509 19:26:26 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:33:10.767 19:26:26 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:33:10.767 19:26:26 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:33:10.767 19:26:26 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:11.025 [2024-04-18 19:26:26.785313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:11.025 [2024-04-18 19:26:26.785409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.025 [2024-04-18 19:26:26.785448] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:11.025 [2024-04-18 19:26:26.785469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.025 [2024-04-18 19:26:26.788019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.025 [2024-04-18 19:26:26.788077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:11.025 [2024-04-18 19:26:26.788218] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:33:11.025 [2024-04-18 19:26:26.788279] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:11.025 pt1 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.026 19:26:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.284 19:26:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:11.284 "name": "raid_bdev1", 00:33:11.284 "uuid": "7d4d8a38-7905-4609-a0d5-4a2e45f808b6", 00:33:11.284 "strip_size_kb": 64, 00:33:11.284 "state": "configuring", 00:33:11.284 "raid_level": "concat", 00:33:11.284 "superblock": true, 00:33:11.284 "num_base_bdevs": 3, 00:33:11.284 "num_base_bdevs_discovered": 1, 00:33:11.284 "num_base_bdevs_operational": 3, 00:33:11.284 "base_bdevs_list": [ 00:33:11.284 { 00:33:11.284 "name": "pt1", 00:33:11.284 "uuid": "689f1e95-35fb-5e39-bd87-c6c46218bc0a", 00:33:11.284 "is_configured": true, 00:33:11.284 "data_offset": 2048, 00:33:11.284 "data_size": 63488 00:33:11.284 }, 00:33:11.284 { 00:33:11.284 "name": null, 00:33:11.284 "uuid": "fc6704e9-e63d-5162-9cea-4b5374187ad4", 00:33:11.284 "is_configured": false, 00:33:11.284 "data_offset": 2048, 00:33:11.284 "data_size": 63488 00:33:11.284 }, 00:33:11.284 { 00:33:11.284 "name": null, 00:33:11.284 "uuid": "d3eac037-3405-5e7a-9543-c28131bdb77f", 00:33:11.284 "is_configured": false, 00:33:11.284 "data_offset": 2048, 00:33:11.284 "data_size": 63488 00:33:11.284 } 00:33:11.284 ] 00:33:11.284 }' 00:33:11.284 19:26:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:11.284 19:26:27 -- common/autotest_common.sh@10 -- # set +x 00:33:12.220 19:26:27 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:33:12.220 19:26:27 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:12.220 [2024-04-18 19:26:28.033571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:12.220 [2024-04-18 19:26:28.033681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.220 [2024-04-18 19:26:28.033732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:12.220 [2024-04-18 19:26:28.033756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.220 [2024-04-18 19:26:28.034241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.220 [2024-04-18 19:26:28.034281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:12.220 [2024-04-18 19:26:28.034413] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:33:12.220 [2024-04-18 19:26:28.034439] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:12.220 pt2 00:33:12.220 19:26:28 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:12.478 [2024-04-18 19:26:28.333700] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.478 19:26:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.736 19:26:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:12.736 "name": "raid_bdev1", 00:33:12.736 "uuid": "7d4d8a38-7905-4609-a0d5-4a2e45f808b6", 00:33:12.736 "strip_size_kb": 64, 00:33:12.736 "state": "configuring", 00:33:12.736 "raid_level": "concat", 00:33:12.736 "superblock": true, 00:33:12.736 "num_base_bdevs": 3, 00:33:12.736 "num_base_bdevs_discovered": 1, 00:33:12.736 "num_base_bdevs_operational": 3, 00:33:12.736 "base_bdevs_list": [ 00:33:12.736 { 00:33:12.736 "name": "pt1", 00:33:12.736 "uuid": "689f1e95-35fb-5e39-bd87-c6c46218bc0a", 00:33:12.736 "is_configured": true, 00:33:12.736 "data_offset": 2048, 00:33:12.736 "data_size": 63488 00:33:12.736 }, 00:33:12.736 { 00:33:12.736 "name": null, 00:33:12.736 "uuid": "fc6704e9-e63d-5162-9cea-4b5374187ad4", 00:33:12.736 "is_configured": false, 00:33:12.736 "data_offset": 2048, 00:33:12.736 "data_size": 63488 00:33:12.736 }, 00:33:12.736 { 00:33:12.736 "name": null, 00:33:12.736 "uuid": "d3eac037-3405-5e7a-9543-c28131bdb77f", 00:33:12.736 "is_configured": false, 00:33:12.736 "data_offset": 2048, 00:33:12.736 "data_size": 63488 00:33:12.736 } 00:33:12.736 ] 00:33:12.736 }' 00:33:12.736 19:26:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:12.736 19:26:28 -- common/autotest_common.sh@10 -- # set +x 00:33:13.671 19:26:29 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:33:13.671 19:26:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:33:13.671 19:26:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:13.671 [2024-04-18 19:26:29.517862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:13.671 [2024-04-18 19:26:29.517970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.671 [2024-04-18 19:26:29.518010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:13.671 [2024-04-18 19:26:29.518046] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.671 [2024-04-18 19:26:29.518517] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.671 [2024-04-18 19:26:29.518562] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:13.671 [2024-04-18 19:26:29.518696] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:33:13.671 [2024-04-18 19:26:29.518721] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:13.671 pt2 00:33:13.671 19:26:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:33:13.671 19:26:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:33:13.671 19:26:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:13.930 [2024-04-18 19:26:29.745912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:13.930 [2024-04-18 19:26:29.745998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.930 [2024-04-18 19:26:29.746039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:33:13.930 [2024-04-18 19:26:29.746066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.930 [2024-04-18 19:26:29.746511] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.930 [2024-04-18 19:26:29.746554] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:13.930 [2024-04-18 19:26:29.746682] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:33:13.930 [2024-04-18 19:26:29.746713] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:13.930 [2024-04-18 19:26:29.746833] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:33:13.930 [2024-04-18 19:26:29.746849] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:13.930 [2024-04-18 19:26:29.746978] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:13.930 [2024-04-18 19:26:29.747286] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:33:13.930 [2024-04-18 19:26:29.747304] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:33:13.930 [2024-04-18 19:26:29.747459] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:13.930 pt3 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.930 19:26:29 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.188 19:26:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:14.188 "name": "raid_bdev1", 00:33:14.188 "uuid": "7d4d8a38-7905-4609-a0d5-4a2e45f808b6", 00:33:14.188 "strip_size_kb": 64, 00:33:14.188 "state": "online", 00:33:14.188 "raid_level": "concat", 00:33:14.188 "superblock": true, 00:33:14.188 "num_base_bdevs": 3, 00:33:14.188 "num_base_bdevs_discovered": 3, 00:33:14.188 "num_base_bdevs_operational": 3, 00:33:14.188 "base_bdevs_list": [ 00:33:14.188 { 00:33:14.188 "name": "pt1", 00:33:14.188 "uuid": "689f1e95-35fb-5e39-bd87-c6c46218bc0a", 00:33:14.188 "is_configured": true, 00:33:14.188 "data_offset": 2048, 00:33:14.188 "data_size": 63488 00:33:14.188 }, 00:33:14.188 { 00:33:14.188 "name": "pt2", 00:33:14.188 "uuid": "fc6704e9-e63d-5162-9cea-4b5374187ad4", 00:33:14.188 "is_configured": true, 00:33:14.188 "data_offset": 2048, 00:33:14.188 "data_size": 63488 00:33:14.188 }, 00:33:14.188 { 00:33:14.188 "name": "pt3", 00:33:14.188 "uuid": "d3eac037-3405-5e7a-9543-c28131bdb77f", 00:33:14.188 "is_configured": true, 00:33:14.188 "data_offset": 2048, 00:33:14.188 "data_size": 63488 00:33:14.188 } 00:33:14.188 ] 00:33:14.188 }' 00:33:14.188 19:26:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:14.188 19:26:30 -- common/autotest_common.sh@10 -- # set +x 00:33:15.123 19:26:30 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:15.123 19:26:30 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:33:15.123 [2024-04-18 19:26:31.050546] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.381 19:26:31 -- bdev/bdev_raid.sh@430 -- # '[' 7d4d8a38-7905-4609-a0d5-4a2e45f808b6 '!=' 7d4d8a38-7905-4609-a0d5-4a2e45f808b6 ']' 00:33:15.381 19:26:31 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:33:15.381 19:26:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:33:15.381 19:26:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:33:15.381 19:26:31 -- bdev/bdev_raid.sh@511 -- # killprocess 126165 00:33:15.381 19:26:31 -- common/autotest_common.sh@936 -- # '[' -z 126165 ']' 00:33:15.381 19:26:31 -- common/autotest_common.sh@940 -- # kill -0 126165 00:33:15.381 19:26:31 -- common/autotest_common.sh@941 -- # uname 00:33:15.381 19:26:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:15.381 19:26:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126165 00:33:15.381 killing process with pid 126165 00:33:15.381 19:26:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:15.381 19:26:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:15.381 19:26:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126165' 00:33:15.381 19:26:31 -- common/autotest_common.sh@955 -- # kill 126165 00:33:15.381 19:26:31 -- common/autotest_common.sh@960 -- # wait 126165 00:33:15.381 [2024-04-18 19:26:31.093819] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:15.381 [2024-04-18 19:26:31.093938] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:15.381 [2024-04-18 19:26:31.094744] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:15.381 [2024-04-18 19:26:31.094785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:33:15.640 [2024-04-18 
19:26:31.429579] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:17.016 ************************************ 00:33:17.016 END TEST raid_superblock_test 00:33:17.016 ************************************ 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@513 -- # return 0 00:33:17.016 00:33:17.016 real 0m12.873s 00:33:17.016 user 0m22.137s 00:33:17.016 sys 0m1.522s 00:33:17.016 19:26:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:17.016 19:26:32 -- common/autotest_common.sh@10 -- # set +x 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:33:17.016 19:26:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:33:17.016 19:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:17.016 19:26:32 -- common/autotest_common.sh@10 -- # set +x 00:33:17.016 ************************************ 00:33:17.016 START TEST raid_state_function_test 00:33:17.016 ************************************ 00:33:17.016 19:26:32 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 false 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:17.016 19:26:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:33:17.017 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:33:17.017 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:17.017 19:26:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:33:17.017 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:33:17.017 19:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:17.275 19:26:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:33:17.275 19:26:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=126518 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126518' 00:33:17.276 Process raid pid: 126518 00:33:17.276 19:26:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126518 /var/tmp/spdk-raid.sock 00:33:17.276 19:26:32 -- common/autotest_common.sh@817 -- 
# '[' -z 126518 ']' 00:33:17.276 19:26:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:17.276 19:26:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:17.276 19:26:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:17.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:17.276 19:26:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:17.276 19:26:32 -- common/autotest_common.sh@10 -- # set +x 00:33:17.276 [2024-04-18 19:26:33.015284] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:33:17.276 [2024-04-18 19:26:33.015554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.276 [2024-04-18 19:26:33.196847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.843 [2024-04-18 19:26:33.475310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.843 [2024-04-18 19:26:33.703681] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:18.102 19:26:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:18.102 19:26:33 -- common/autotest_common.sh@850 -- # return 0 00:33:18.102 19:26:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:18.361 [2024-04-18 19:26:34.233892] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:18.361 [2024-04-18 19:26:34.233990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:18.361 [2024-04-18 19:26:34.234001] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:18.361 [2024-04-18 19:26:34.234036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:18.361 [2024-04-18 19:26:34.234044] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:18.361 [2024-04-18 19:26:34.234085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.361 19:26:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:18.637 19:26:34 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:33:18.637 "name": "Existed_Raid", 00:33:18.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.637 "strip_size_kb": 0, 00:33:18.637 "state": "configuring", 00:33:18.637 "raid_level": "raid1", 00:33:18.637 "superblock": false, 00:33:18.637 "num_base_bdevs": 3, 00:33:18.637 "num_base_bdevs_discovered": 0, 00:33:18.637 "num_base_bdevs_operational": 3, 00:33:18.637 "base_bdevs_list": [ 00:33:18.637 { 00:33:18.637 "name": "BaseBdev1", 00:33:18.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.637 "is_configured": false, 00:33:18.637 "data_offset": 0, 00:33:18.637 "data_size": 0 00:33:18.637 }, 00:33:18.637 { 00:33:18.637 "name": "BaseBdev2", 00:33:18.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.637 "is_configured": false, 00:33:18.637 "data_offset": 0, 00:33:18.637 "data_size": 0 00:33:18.637 }, 00:33:18.637 { 00:33:18.637 "name": "BaseBdev3", 00:33:18.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.637 "is_configured": false, 00:33:18.637 "data_offset": 0, 00:33:18.637 "data_size": 0 00:33:18.637 } 00:33:18.637 ] 00:33:18.637 }' 00:33:18.637 19:26:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:18.637 19:26:34 -- common/autotest_common.sh@10 -- # set +x 00:33:19.574 19:26:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:19.833 [2024-04-18 19:26:35.526100] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:19.833 [2024-04-18 19:26:35.526156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:33:19.833 19:26:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:20.091 [2024-04-18 19:26:35.818157] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:20.091 [2024-04-18 19:26:35.818226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:20.091 [2024-04-18 19:26:35.818238] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:20.091 [2024-04-18 19:26:35.818266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:20.091 [2024-04-18 19:26:35.818274] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:20.091 [2024-04-18 19:26:35.818301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:20.091 19:26:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:20.350 [2024-04-18 19:26:36.090029] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:20.350 BaseBdev1 00:33:20.350 19:26:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:33:20.350 19:26:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:33:20.350 19:26:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:33:20.350 19:26:36 -- common/autotest_common.sh@887 -- # local i 00:33:20.350 19:26:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:20.350 19:26:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:20.350 19:26:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:20.607 19:26:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:20.902 [ 00:33:20.902 { 00:33:20.902 "name": "BaseBdev1", 00:33:20.902 "aliases": [ 00:33:20.902 "8707111c-f4d2-4cce-917b-c2402ad4c341" 00:33:20.902 ], 00:33:20.902 "product_name": "Malloc disk", 00:33:20.902 "block_size": 512, 00:33:20.902 "num_blocks": 65536, 00:33:20.902 "uuid": "8707111c-f4d2-4cce-917b-c2402ad4c341", 00:33:20.902 "assigned_rate_limits": { 00:33:20.902 "rw_ios_per_sec": 0, 00:33:20.902 "rw_mbytes_per_sec": 0, 00:33:20.902 "r_mbytes_per_sec": 0, 00:33:20.902 "w_mbytes_per_sec": 0 00:33:20.902 }, 00:33:20.902 "claimed": true, 00:33:20.902 "claim_type": "exclusive_write", 00:33:20.902 "zoned": false, 00:33:20.902 "supported_io_types": { 00:33:20.902 "read": true, 00:33:20.902 "write": true, 00:33:20.902 "unmap": true, 00:33:20.902 "write_zeroes": true, 00:33:20.902 "flush": true, 00:33:20.902 "reset": true, 00:33:20.902 "compare": false, 00:33:20.902 "compare_and_write": false, 00:33:20.902 "abort": true, 00:33:20.902 "nvme_admin": false, 00:33:20.902 "nvme_io": false 00:33:20.902 }, 00:33:20.902 "memory_domains": [ 00:33:20.902 { 00:33:20.902 "dma_device_id": "system", 00:33:20.902 "dma_device_type": 1 00:33:20.902 }, 00:33:20.902 { 00:33:20.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.902 "dma_device_type": 2 00:33:20.902 } 00:33:20.902 ], 00:33:20.902 "driver_specific": {} 00:33:20.902 } 00:33:20.902 ] 00:33:20.902 19:26:36 -- common/autotest_common.sh@893 -- # return 0 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.902 19:26:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.174 19:26:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:21.174 "name": "Existed_Raid", 00:33:21.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.174 "strip_size_kb": 0, 00:33:21.174 "state": "configuring", 00:33:21.174 "raid_level": "raid1", 00:33:21.174 "superblock": false, 00:33:21.174 "num_base_bdevs": 3, 00:33:21.174 "num_base_bdevs_discovered": 1, 00:33:21.174 "num_base_bdevs_operational": 3, 00:33:21.174 "base_bdevs_list": [ 00:33:21.174 { 00:33:21.174 "name": "BaseBdev1", 00:33:21.174 "uuid": "8707111c-f4d2-4cce-917b-c2402ad4c341", 00:33:21.174 "is_configured": true, 00:33:21.174 "data_offset": 0, 00:33:21.174 "data_size": 65536 00:33:21.174 }, 00:33:21.174 { 00:33:21.174 "name": "BaseBdev2", 00:33:21.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.174 "is_configured": false, 00:33:21.174 
"data_offset": 0, 00:33:21.174 "data_size": 0 00:33:21.174 }, 00:33:21.174 { 00:33:21.174 "name": "BaseBdev3", 00:33:21.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.174 "is_configured": false, 00:33:21.174 "data_offset": 0, 00:33:21.174 "data_size": 0 00:33:21.174 } 00:33:21.174 ] 00:33:21.174 }' 00:33:21.174 19:26:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:21.174 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:33:21.739 19:26:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:21.997 [2024-04-18 19:26:37.814532] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:21.997 [2024-04-18 19:26:37.814615] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:33:21.997 19:26:37 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:33:21.997 19:26:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:22.255 [2024-04-18 19:26:38.082596] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:22.255 [2024-04-18 19:26:38.084791] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:22.255 [2024-04-18 19:26:38.084854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:22.255 [2024-04-18 19:26:38.084865] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:22.255 [2024-04-18 19:26:38.084890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.255 19:26:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:22.513 19:26:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:22.513 "name": "Existed_Raid", 00:33:22.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.513 "strip_size_kb": 0, 00:33:22.513 "state": "configuring", 00:33:22.513 "raid_level": "raid1", 00:33:22.513 "superblock": false, 00:33:22.513 "num_base_bdevs": 3, 00:33:22.513 "num_base_bdevs_discovered": 1, 00:33:22.513 "num_base_bdevs_operational": 3, 00:33:22.513 "base_bdevs_list": [ 00:33:22.513 { 00:33:22.513 "name": "BaseBdev1", 00:33:22.513 "uuid": 
"8707111c-f4d2-4cce-917b-c2402ad4c341", 00:33:22.513 "is_configured": true, 00:33:22.513 "data_offset": 0, 00:33:22.513 "data_size": 65536 00:33:22.513 }, 00:33:22.513 { 00:33:22.513 "name": "BaseBdev2", 00:33:22.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.513 "is_configured": false, 00:33:22.513 "data_offset": 0, 00:33:22.513 "data_size": 0 00:33:22.513 }, 00:33:22.513 { 00:33:22.513 "name": "BaseBdev3", 00:33:22.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.513 "is_configured": false, 00:33:22.513 "data_offset": 0, 00:33:22.513 "data_size": 0 00:33:22.513 } 00:33:22.513 ] 00:33:22.513 }' 00:33:22.513 19:26:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:22.513 19:26:38 -- common/autotest_common.sh@10 -- # set +x 00:33:23.447 19:26:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:23.447 [2024-04-18 19:26:39.306131] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:23.447 BaseBdev2 00:33:23.447 19:26:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:33:23.447 19:26:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:33:23.447 19:26:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:33:23.447 19:26:39 -- common/autotest_common.sh@887 -- # local i 00:33:23.447 19:26:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:23.447 19:26:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:23.447 19:26:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:23.707 19:26:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:23.965 [ 00:33:23.965 { 00:33:23.965 "name": "BaseBdev2", 00:33:23.965 "aliases": [ 00:33:23.965 "d56f4197-6e2a-42b8-a568-e56c032db373" 00:33:23.965 ], 00:33:23.965 "product_name": "Malloc disk", 00:33:23.965 "block_size": 512, 00:33:23.965 "num_blocks": 65536, 00:33:23.965 "uuid": "d56f4197-6e2a-42b8-a568-e56c032db373", 00:33:23.965 "assigned_rate_limits": { 00:33:23.965 "rw_ios_per_sec": 0, 00:33:23.965 "rw_mbytes_per_sec": 0, 00:33:23.965 "r_mbytes_per_sec": 0, 00:33:23.965 "w_mbytes_per_sec": 0 00:33:23.965 }, 00:33:23.965 "claimed": true, 00:33:23.965 "claim_type": "exclusive_write", 00:33:23.965 "zoned": false, 00:33:23.965 "supported_io_types": { 00:33:23.965 "read": true, 00:33:23.965 "write": true, 00:33:23.965 "unmap": true, 00:33:23.965 "write_zeroes": true, 00:33:23.965 "flush": true, 00:33:23.965 "reset": true, 00:33:23.965 "compare": false, 00:33:23.965 "compare_and_write": false, 00:33:23.965 "abort": true, 00:33:23.965 "nvme_admin": false, 00:33:23.965 "nvme_io": false 00:33:23.965 }, 00:33:23.965 "memory_domains": [ 00:33:23.965 { 00:33:23.965 "dma_device_id": "system", 00:33:23.965 "dma_device_type": 1 00:33:23.965 }, 00:33:23.965 { 00:33:23.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.965 "dma_device_type": 2 00:33:23.965 } 00:33:23.965 ], 00:33:23.965 "driver_specific": {} 00:33:23.965 } 00:33:23.965 ] 00:33:23.965 19:26:39 -- common/autotest_common.sh@893 -- # return 0 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:23.965 19:26:39 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.965 19:26:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:24.223 19:26:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:24.223 "name": "Existed_Raid", 00:33:24.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.223 "strip_size_kb": 0, 00:33:24.223 "state": "configuring", 00:33:24.223 "raid_level": "raid1", 00:33:24.223 "superblock": false, 00:33:24.223 "num_base_bdevs": 3, 00:33:24.223 "num_base_bdevs_discovered": 2, 00:33:24.223 "num_base_bdevs_operational": 3, 00:33:24.223 "base_bdevs_list": [ 00:33:24.223 { 00:33:24.223 "name": "BaseBdev1", 00:33:24.223 "uuid": "8707111c-f4d2-4cce-917b-c2402ad4c341", 00:33:24.223 "is_configured": true, 00:33:24.223 "data_offset": 0, 00:33:24.223 "data_size": 65536 00:33:24.223 }, 00:33:24.223 { 00:33:24.223 "name": "BaseBdev2", 00:33:24.223 "uuid": "d56f4197-6e2a-42b8-a568-e56c032db373", 00:33:24.223 "is_configured": true, 00:33:24.223 "data_offset": 0, 00:33:24.223 "data_size": 65536 00:33:24.223 }, 00:33:24.223 { 00:33:24.223 "name": "BaseBdev3", 00:33:24.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.223 "is_configured": false, 00:33:24.223 "data_offset": 0, 00:33:24.223 "data_size": 0 00:33:24.223 } 00:33:24.223 ] 00:33:24.223 }' 00:33:24.223 19:26:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:24.223 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:33:24.790 19:26:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:25.357 [2024-04-18 19:26:41.021030] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:25.357 [2024-04-18 19:26:41.021089] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:33:25.357 [2024-04-18 19:26:41.021116] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:25.357 [2024-04-18 19:26:41.021255] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:33:25.357 [2024-04-18 19:26:41.021604] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:33:25.357 [2024-04-18 19:26:41.021617] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:33:25.357 [2024-04-18 19:26:41.021880] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:25.357 BaseBdev3 00:33:25.357 19:26:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:33:25.357 19:26:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:33:25.357 19:26:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 
00:33:25.357 19:26:41 -- common/autotest_common.sh@887 -- # local i 00:33:25.357 19:26:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:25.357 19:26:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:25.357 19:26:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:25.357 19:26:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:25.616 [ 00:33:25.616 { 00:33:25.616 "name": "BaseBdev3", 00:33:25.616 "aliases": [ 00:33:25.616 "b5afe0d6-12e0-4115-ae29-fac038bf3c41" 00:33:25.616 ], 00:33:25.616 "product_name": "Malloc disk", 00:33:25.616 "block_size": 512, 00:33:25.616 "num_blocks": 65536, 00:33:25.616 "uuid": "b5afe0d6-12e0-4115-ae29-fac038bf3c41", 00:33:25.616 "assigned_rate_limits": { 00:33:25.616 "rw_ios_per_sec": 0, 00:33:25.616 "rw_mbytes_per_sec": 0, 00:33:25.616 "r_mbytes_per_sec": 0, 00:33:25.616 "w_mbytes_per_sec": 0 00:33:25.616 }, 00:33:25.616 "claimed": true, 00:33:25.616 "claim_type": "exclusive_write", 00:33:25.616 "zoned": false, 00:33:25.616 "supported_io_types": { 00:33:25.616 "read": true, 00:33:25.616 "write": true, 00:33:25.616 "unmap": true, 00:33:25.616 "write_zeroes": true, 00:33:25.616 "flush": true, 00:33:25.616 "reset": true, 00:33:25.616 "compare": false, 00:33:25.616 "compare_and_write": false, 00:33:25.616 "abort": true, 00:33:25.616 "nvme_admin": false, 00:33:25.616 "nvme_io": false 00:33:25.616 }, 00:33:25.616 "memory_domains": [ 00:33:25.616 { 00:33:25.616 "dma_device_id": "system", 00:33:25.616 "dma_device_type": 1 00:33:25.616 }, 00:33:25.616 { 00:33:25.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.616 "dma_device_type": 2 00:33:25.616 } 00:33:25.616 ], 00:33:25.616 "driver_specific": {} 00:33:25.616 } 00:33:25.616 ] 00:33:25.616 19:26:41 -- common/autotest_common.sh@893 -- # return 0 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:25.616 19:26:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.183 19:26:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:26.183 "name": "Existed_Raid", 00:33:26.183 "uuid": "27a627fe-1969-4915-900d-fb65bf818d35", 00:33:26.183 "strip_size_kb": 0, 00:33:26.183 "state": "online", 00:33:26.183 "raid_level": "raid1", 00:33:26.183 "superblock": false, 00:33:26.183 "num_base_bdevs": 3, 00:33:26.183 "num_base_bdevs_discovered": 3, 00:33:26.183 
"num_base_bdevs_operational": 3, 00:33:26.183 "base_bdevs_list": [ 00:33:26.183 { 00:33:26.183 "name": "BaseBdev1", 00:33:26.183 "uuid": "8707111c-f4d2-4cce-917b-c2402ad4c341", 00:33:26.183 "is_configured": true, 00:33:26.183 "data_offset": 0, 00:33:26.183 "data_size": 65536 00:33:26.183 }, 00:33:26.183 { 00:33:26.183 "name": "BaseBdev2", 00:33:26.183 "uuid": "d56f4197-6e2a-42b8-a568-e56c032db373", 00:33:26.183 "is_configured": true, 00:33:26.183 "data_offset": 0, 00:33:26.183 "data_size": 65536 00:33:26.183 }, 00:33:26.183 { 00:33:26.183 "name": "BaseBdev3", 00:33:26.183 "uuid": "b5afe0d6-12e0-4115-ae29-fac038bf3c41", 00:33:26.183 "is_configured": true, 00:33:26.183 "data_offset": 0, 00:33:26.183 "data_size": 65536 00:33:26.183 } 00:33:26.183 ] 00:33:26.183 }' 00:33:26.183 19:26:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:26.183 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:33:26.785 19:26:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:27.044 [2024-04-18 19:26:42.729538] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.044 19:26:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:27.303 19:26:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:27.303 "name": "Existed_Raid", 00:33:27.303 "uuid": "27a627fe-1969-4915-900d-fb65bf818d35", 00:33:27.303 "strip_size_kb": 0, 00:33:27.303 "state": "online", 00:33:27.303 "raid_level": "raid1", 00:33:27.303 "superblock": false, 00:33:27.303 "num_base_bdevs": 3, 00:33:27.303 "num_base_bdevs_discovered": 2, 00:33:27.303 "num_base_bdevs_operational": 2, 00:33:27.303 "base_bdevs_list": [ 00:33:27.303 { 00:33:27.303 "name": null, 00:33:27.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.303 "is_configured": false, 00:33:27.303 "data_offset": 0, 00:33:27.303 "data_size": 65536 00:33:27.303 }, 00:33:27.303 { 00:33:27.303 "name": "BaseBdev2", 00:33:27.303 "uuid": "d56f4197-6e2a-42b8-a568-e56c032db373", 00:33:27.303 "is_configured": true, 00:33:27.303 "data_offset": 0, 00:33:27.303 "data_size": 65536 00:33:27.303 }, 00:33:27.303 { 00:33:27.303 "name": "BaseBdev3", 00:33:27.303 "uuid": 
"b5afe0d6-12e0-4115-ae29-fac038bf3c41", 00:33:27.303 "is_configured": true, 00:33:27.303 "data_offset": 0, 00:33:27.303 "data_size": 65536 00:33:27.303 } 00:33:27.303 ] 00:33:27.303 }' 00:33:27.303 19:26:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:27.303 19:26:43 -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 19:26:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:33:27.869 19:26:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:27.869 19:26:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.869 19:26:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:33:28.127 19:26:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:33:28.127 19:26:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:28.127 19:26:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:28.385 [2024-04-18 19:26:44.199815] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:28.643 19:26:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:33:28.643 19:26:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:28.643 19:26:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.643 19:26:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:33:28.901 19:26:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:33:28.901 19:26:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:28.901 19:26:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:29.159 [2024-04-18 19:26:44.873079] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:29.159 [2024-04-18 19:26:44.873178] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:29.159 [2024-04-18 19:26:44.982580] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:29.159 [2024-04-18 19:26:44.982709] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.159 [2024-04-18 19:26:44.982722] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:33:29.159 19:26:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:33:29.159 19:26:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:29.159 19:26:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.159 19:26:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:33:29.419 19:26:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:33:29.419 19:26:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:33:29.419 19:26:45 -- bdev/bdev_raid.sh@287 -- # killprocess 126518 00:33:29.419 19:26:45 -- common/autotest_common.sh@936 -- # '[' -z 126518 ']' 00:33:29.419 19:26:45 -- common/autotest_common.sh@940 -- # kill -0 126518 00:33:29.419 19:26:45 -- common/autotest_common.sh@941 -- # uname 00:33:29.419 19:26:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:29.419 19:26:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126518 00:33:29.419 killing process with pid 126518 00:33:29.419 19:26:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:29.419 
19:26:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:29.419 19:26:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126518' 00:33:29.419 19:26:45 -- common/autotest_common.sh@955 -- # kill 126518 00:33:29.419 19:26:45 -- common/autotest_common.sh@960 -- # wait 126518 00:33:29.419 [2024-04-18 19:26:45.322385] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:29.419 [2024-04-18 19:26:45.322543] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:30.845 ************************************ 00:33:30.845 END TEST raid_state_function_test 00:33:30.845 ************************************ 00:33:30.845 19:26:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:33:30.845 00:33:30.845 real 0m13.805s 00:33:30.845 user 0m24.158s 00:33:30.845 sys 0m1.621s 00:33:30.845 19:26:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:30.845 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:33:31.103 19:26:46 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:33:31.104 19:26:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:33:31.104 19:26:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:31.104 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:33:31.104 ************************************ 00:33:31.104 START TEST raid_state_function_test_sb 00:33:31.104 ************************************ 00:33:31.104 19:26:46 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 true 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=126945 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126945' 00:33:31.104 Process raid pid: 126945 00:33:31.104 19:26:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126945 /var/tmp/spdk-raid.sock 00:33:31.104 19:26:46 -- common/autotest_common.sh@817 -- # '[' -z 126945 ']' 00:33:31.104 19:26:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:31.104 19:26:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:31.104 19:26:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:31.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:31.104 19:26:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:31.104 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:33:31.104 [2024-04-18 19:26:46.899571] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:33:31.104 [2024-04-18 19:26:46.899755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.362 [2024-04-18 19:26:47.088974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.620 [2024-04-18 19:26:47.356476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.899 [2024-04-18 19:26:47.602575] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:32.158 19:26:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:32.158 19:26:47 -- common/autotest_common.sh@850 -- # return 0 00:33:32.158 19:26:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:32.416 [2024-04-18 19:26:48.184540] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:32.416 [2024-04-18 19:26:48.185417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:32.416 [2024-04-18 19:26:48.185606] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:32.416 [2024-04-18 19:26:48.185849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:32.416 [2024-04-18 19:26:48.185999] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:32.416 [2024-04-18 19:26:48.186272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.416 19:26:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:32.674 19:26:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:32.674 "name": "Existed_Raid", 00:33:32.674 "uuid": "4d8e42a9-a418-4d42-ab04-8b732daae3f4", 00:33:32.674 "strip_size_kb": 0, 00:33:32.674 "state": "configuring", 00:33:32.674 "raid_level": "raid1", 00:33:32.674 "superblock": true, 00:33:32.674 "num_base_bdevs": 3, 00:33:32.674 "num_base_bdevs_discovered": 0, 00:33:32.674 "num_base_bdevs_operational": 3, 00:33:32.674 "base_bdevs_list": [ 00:33:32.674 { 00:33:32.674 "name": "BaseBdev1", 00:33:32.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.674 "is_configured": false, 00:33:32.674 "data_offset": 0, 00:33:32.674 "data_size": 0 00:33:32.674 }, 00:33:32.674 { 00:33:32.674 "name": "BaseBdev2", 00:33:32.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.674 "is_configured": false, 00:33:32.674 "data_offset": 0, 00:33:32.674 "data_size": 0 00:33:32.674 }, 00:33:32.674 { 00:33:32.674 "name": "BaseBdev3", 00:33:32.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.674 "is_configured": false, 00:33:32.674 "data_offset": 0, 00:33:32.674 "data_size": 0 00:33:32.674 } 00:33:32.674 ] 00:33:32.674 }' 00:33:32.674 19:26:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:32.674 19:26:48 -- common/autotest_common.sh@10 -- # set +x 00:33:33.609 19:26:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:33.609 [2024-04-18 19:26:49.516569] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:33.609 [2024-04-18 19:26:49.516803] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:33:33.609 19:26:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:34.175 [2024-04-18 19:26:49.828702] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:34.175 [2024-04-18 19:26:49.829322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:34.175 [2024-04-18 19:26:49.829448] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:34.175 [2024-04-18 19:26:49.829612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:34.175 [2024-04-18 19:26:49.829755] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:34.175 [2024-04-18 19:26:49.829911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:34.175 19:26:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:34.441 [2024-04-18 19:26:50.144041] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:34.441 BaseBdev1 00:33:34.441 19:26:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:33:34.442 19:26:50 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:33:34.442 19:26:50 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:33:34.442 19:26:50 -- common/autotest_common.sh@887 -- # local i 00:33:34.442 19:26:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:34.442 19:26:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:34.442 19:26:50 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:34.701 19:26:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:34.979 [ 00:33:34.979 { 00:33:34.979 "name": "BaseBdev1", 00:33:34.979 "aliases": [ 00:33:34.979 "35b43785-e6ca-4c3b-8bcf-bf9d9966445c" 00:33:34.979 ], 00:33:34.979 "product_name": "Malloc disk", 00:33:34.979 "block_size": 512, 00:33:34.979 "num_blocks": 65536, 00:33:34.979 "uuid": "35b43785-e6ca-4c3b-8bcf-bf9d9966445c", 00:33:34.979 "assigned_rate_limits": { 00:33:34.979 "rw_ios_per_sec": 0, 00:33:34.979 "rw_mbytes_per_sec": 0, 00:33:34.979 "r_mbytes_per_sec": 0, 00:33:34.979 "w_mbytes_per_sec": 0 00:33:34.979 }, 00:33:34.979 "claimed": true, 00:33:34.979 "claim_type": "exclusive_write", 00:33:34.979 "zoned": false, 00:33:34.979 "supported_io_types": { 00:33:34.979 "read": true, 00:33:34.979 "write": true, 00:33:34.979 "unmap": true, 00:33:34.979 "write_zeroes": true, 00:33:34.979 "flush": true, 00:33:34.979 "reset": true, 00:33:34.979 "compare": false, 00:33:34.979 "compare_and_write": false, 00:33:34.979 "abort": true, 00:33:34.980 "nvme_admin": false, 00:33:34.980 "nvme_io": false 00:33:34.980 }, 00:33:34.980 "memory_domains": [ 00:33:34.980 { 00:33:34.980 "dma_device_id": "system", 00:33:34.980 "dma_device_type": 1 00:33:34.980 }, 00:33:34.980 { 00:33:34.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:34.980 "dma_device_type": 2 00:33:34.980 } 00:33:34.980 ], 00:33:34.980 "driver_specific": {} 00:33:34.980 } 00:33:34.980 ] 00:33:34.980 19:26:50 -- common/autotest_common.sh@893 -- # return 0 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.980 19:26:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:35.244 19:26:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:35.244 "name": "Existed_Raid", 00:33:35.244 "uuid": "25e240f1-0481-4642-911a-f03b8b2e9cab", 00:33:35.244 "strip_size_kb": 0, 00:33:35.244 "state": "configuring", 00:33:35.244 "raid_level": "raid1", 00:33:35.244 "superblock": true, 00:33:35.244 "num_base_bdevs": 3, 00:33:35.244 "num_base_bdevs_discovered": 1, 00:33:35.244 "num_base_bdevs_operational": 3, 00:33:35.244 "base_bdevs_list": [ 00:33:35.244 { 
00:33:35.244 "name": "BaseBdev1", 00:33:35.244 "uuid": "35b43785-e6ca-4c3b-8bcf-bf9d9966445c", 00:33:35.244 "is_configured": true, 00:33:35.244 "data_offset": 2048, 00:33:35.244 "data_size": 63488 00:33:35.244 }, 00:33:35.244 { 00:33:35.244 "name": "BaseBdev2", 00:33:35.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.244 "is_configured": false, 00:33:35.244 "data_offset": 0, 00:33:35.244 "data_size": 0 00:33:35.244 }, 00:33:35.244 { 00:33:35.244 "name": "BaseBdev3", 00:33:35.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.244 "is_configured": false, 00:33:35.244 "data_offset": 0, 00:33:35.244 "data_size": 0 00:33:35.244 } 00:33:35.244 ] 00:33:35.244 }' 00:33:35.244 19:26:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:35.244 19:26:50 -- common/autotest_common.sh@10 -- # set +x 00:33:35.810 19:26:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:36.068 [2024-04-18 19:26:51.948509] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:36.068 [2024-04-18 19:26:51.948743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:33:36.068 19:26:51 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:33:36.068 19:26:51 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:36.634 19:26:52 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:36.891 BaseBdev1 00:33:36.891 19:26:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:33:36.891 19:26:52 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:33:36.891 19:26:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:33:36.891 19:26:52 -- common/autotest_common.sh@887 -- # local i 00:33:36.891 19:26:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:36.891 19:26:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:36.891 19:26:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:37.148 19:26:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:37.407 [ 00:33:37.407 { 00:33:37.407 "name": "BaseBdev1", 00:33:37.407 "aliases": [ 00:33:37.407 "5808e44c-c3b0-4df2-8c6b-d9a39e151674" 00:33:37.407 ], 00:33:37.407 "product_name": "Malloc disk", 00:33:37.407 "block_size": 512, 00:33:37.407 "num_blocks": 65536, 00:33:37.407 "uuid": "5808e44c-c3b0-4df2-8c6b-d9a39e151674", 00:33:37.407 "assigned_rate_limits": { 00:33:37.407 "rw_ios_per_sec": 0, 00:33:37.407 "rw_mbytes_per_sec": 0, 00:33:37.407 "r_mbytes_per_sec": 0, 00:33:37.407 "w_mbytes_per_sec": 0 00:33:37.407 }, 00:33:37.407 "claimed": false, 00:33:37.407 "zoned": false, 00:33:37.407 "supported_io_types": { 00:33:37.407 "read": true, 00:33:37.407 "write": true, 00:33:37.407 "unmap": true, 00:33:37.407 "write_zeroes": true, 00:33:37.407 "flush": true, 00:33:37.407 "reset": true, 00:33:37.407 "compare": false, 00:33:37.407 "compare_and_write": false, 00:33:37.407 "abort": true, 00:33:37.407 "nvme_admin": false, 00:33:37.407 "nvme_io": false 00:33:37.407 }, 00:33:37.407 "memory_domains": [ 00:33:37.407 { 00:33:37.407 "dma_device_id": "system", 00:33:37.407 "dma_device_type": 1 00:33:37.407 }, 
00:33:37.407 { 00:33:37.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.407 "dma_device_type": 2 00:33:37.407 } 00:33:37.407 ], 00:33:37.407 "driver_specific": {} 00:33:37.407 } 00:33:37.407 ] 00:33:37.407 19:26:53 -- common/autotest_common.sh@893 -- # return 0 00:33:37.407 19:26:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:37.665 [2024-04-18 19:26:53.455227] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:37.665 [2024-04-18 19:26:53.457413] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:37.665 [2024-04-18 19:26:53.457918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:37.665 [2024-04-18 19:26:53.457950] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:37.665 [2024-04-18 19:26:53.458067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:37.665 19:26:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.923 19:26:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:37.923 "name": "Existed_Raid", 00:33:37.923 "uuid": "9ace06ca-a921-41b0-96d9-227e659b2dfd", 00:33:37.923 "strip_size_kb": 0, 00:33:37.923 "state": "configuring", 00:33:37.923 "raid_level": "raid1", 00:33:37.923 "superblock": true, 00:33:37.923 "num_base_bdevs": 3, 00:33:37.923 "num_base_bdevs_discovered": 1, 00:33:37.923 "num_base_bdevs_operational": 3, 00:33:37.923 "base_bdevs_list": [ 00:33:37.923 { 00:33:37.923 "name": "BaseBdev1", 00:33:37.923 "uuid": "5808e44c-c3b0-4df2-8c6b-d9a39e151674", 00:33:37.923 "is_configured": true, 00:33:37.923 "data_offset": 2048, 00:33:37.923 "data_size": 63488 00:33:37.923 }, 00:33:37.923 { 00:33:37.923 "name": "BaseBdev2", 00:33:37.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.923 "is_configured": false, 00:33:37.923 "data_offset": 0, 00:33:37.923 "data_size": 0 00:33:37.923 }, 00:33:37.923 { 00:33:37.923 "name": "BaseBdev3", 00:33:37.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.923 "is_configured": false, 00:33:37.923 "data_offset": 0, 00:33:37.923 "data_size": 0 00:33:37.923 } 00:33:37.923 ] 00:33:37.923 }' 00:33:37.924 19:26:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:33:37.924 19:26:53 -- common/autotest_common.sh@10 -- # set +x 00:33:38.858 19:26:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:39.114 [2024-04-18 19:26:54.802148] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:39.114 BaseBdev2 00:33:39.114 19:26:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:33:39.114 19:26:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:33:39.114 19:26:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:33:39.114 19:26:54 -- common/autotest_common.sh@887 -- # local i 00:33:39.114 19:26:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:39.114 19:26:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:39.114 19:26:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:39.372 19:26:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:39.372 [ 00:33:39.372 { 00:33:39.372 "name": "BaseBdev2", 00:33:39.372 "aliases": [ 00:33:39.372 "29641426-5b92-466d-b795-73a5dc23241d" 00:33:39.372 ], 00:33:39.372 "product_name": "Malloc disk", 00:33:39.372 "block_size": 512, 00:33:39.372 "num_blocks": 65536, 00:33:39.372 "uuid": "29641426-5b92-466d-b795-73a5dc23241d", 00:33:39.372 "assigned_rate_limits": { 00:33:39.372 "rw_ios_per_sec": 0, 00:33:39.372 "rw_mbytes_per_sec": 0, 00:33:39.372 "r_mbytes_per_sec": 0, 00:33:39.372 "w_mbytes_per_sec": 0 00:33:39.372 }, 00:33:39.372 "claimed": true, 00:33:39.372 "claim_type": "exclusive_write", 00:33:39.372 "zoned": false, 00:33:39.372 "supported_io_types": { 00:33:39.372 "read": true, 00:33:39.372 "write": true, 00:33:39.372 "unmap": true, 00:33:39.372 "write_zeroes": true, 00:33:39.372 "flush": true, 00:33:39.372 "reset": true, 00:33:39.372 "compare": false, 00:33:39.372 "compare_and_write": false, 00:33:39.372 "abort": true, 00:33:39.372 "nvme_admin": false, 00:33:39.372 "nvme_io": false 00:33:39.372 }, 00:33:39.372 "memory_domains": [ 00:33:39.372 { 00:33:39.372 "dma_device_id": "system", 00:33:39.372 "dma_device_type": 1 00:33:39.372 }, 00:33:39.372 { 00:33:39.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:39.372 "dma_device_type": 2 00:33:39.372 } 00:33:39.372 ], 00:33:39.372 "driver_specific": {} 00:33:39.372 } 00:33:39.372 ] 00:33:39.372 19:26:55 -- common/autotest_common.sh@893 -- # return 0 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:39.372 19:26:55 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.372 19:26:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:39.630 19:26:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:39.630 "name": "Existed_Raid", 00:33:39.630 "uuid": "9ace06ca-a921-41b0-96d9-227e659b2dfd", 00:33:39.630 "strip_size_kb": 0, 00:33:39.630 "state": "configuring", 00:33:39.630 "raid_level": "raid1", 00:33:39.630 "superblock": true, 00:33:39.630 "num_base_bdevs": 3, 00:33:39.630 "num_base_bdevs_discovered": 2, 00:33:39.630 "num_base_bdevs_operational": 3, 00:33:39.630 "base_bdevs_list": [ 00:33:39.630 { 00:33:39.630 "name": "BaseBdev1", 00:33:39.630 "uuid": "5808e44c-c3b0-4df2-8c6b-d9a39e151674", 00:33:39.630 "is_configured": true, 00:33:39.630 "data_offset": 2048, 00:33:39.630 "data_size": 63488 00:33:39.630 }, 00:33:39.630 { 00:33:39.630 "name": "BaseBdev2", 00:33:39.630 "uuid": "29641426-5b92-466d-b795-73a5dc23241d", 00:33:39.630 "is_configured": true, 00:33:39.630 "data_offset": 2048, 00:33:39.630 "data_size": 63488 00:33:39.630 }, 00:33:39.630 { 00:33:39.630 "name": "BaseBdev3", 00:33:39.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.630 "is_configured": false, 00:33:39.630 "data_offset": 0, 00:33:39.630 "data_size": 0 00:33:39.630 } 00:33:39.630 ] 00:33:39.630 }' 00:33:39.630 19:26:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:39.630 19:26:55 -- common/autotest_common.sh@10 -- # set +x 00:33:40.561 19:26:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:40.561 [2024-04-18 19:26:56.439284] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:40.561 [2024-04-18 19:26:56.439537] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:33:40.561 [2024-04-18 19:26:56.439553] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:40.561 [2024-04-18 19:26:56.439696] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:33:40.561 BaseBdev3 00:33:40.561 [2024-04-18 19:26:56.440037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:33:40.561 [2024-04-18 19:26:56.440049] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:33:40.561 [2024-04-18 19:26:56.440199] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:40.561 19:26:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:33:40.561 19:26:56 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:33:40.561 19:26:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:33:40.561 19:26:56 -- common/autotest_common.sh@887 -- # local i 00:33:40.561 19:26:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:33:40.561 19:26:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:33:40.561 19:26:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:40.819 19:26:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:41.123 [ 00:33:41.123 { 00:33:41.123 "name": "BaseBdev3", 00:33:41.123 "aliases": [ 00:33:41.123 "c0ccfac2-e3d0-4cbc-b999-eed21879e8e7" 00:33:41.123 ], 
00:33:41.123 "product_name": "Malloc disk", 00:33:41.123 "block_size": 512, 00:33:41.123 "num_blocks": 65536, 00:33:41.123 "uuid": "c0ccfac2-e3d0-4cbc-b999-eed21879e8e7", 00:33:41.123 "assigned_rate_limits": { 00:33:41.123 "rw_ios_per_sec": 0, 00:33:41.123 "rw_mbytes_per_sec": 0, 00:33:41.123 "r_mbytes_per_sec": 0, 00:33:41.123 "w_mbytes_per_sec": 0 00:33:41.123 }, 00:33:41.123 "claimed": true, 00:33:41.123 "claim_type": "exclusive_write", 00:33:41.123 "zoned": false, 00:33:41.123 "supported_io_types": { 00:33:41.123 "read": true, 00:33:41.123 "write": true, 00:33:41.123 "unmap": true, 00:33:41.123 "write_zeroes": true, 00:33:41.123 "flush": true, 00:33:41.123 "reset": true, 00:33:41.123 "compare": false, 00:33:41.123 "compare_and_write": false, 00:33:41.123 "abort": true, 00:33:41.123 "nvme_admin": false, 00:33:41.123 "nvme_io": false 00:33:41.123 }, 00:33:41.123 "memory_domains": [ 00:33:41.123 { 00:33:41.123 "dma_device_id": "system", 00:33:41.123 "dma_device_type": 1 00:33:41.123 }, 00:33:41.123 { 00:33:41.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.123 "dma_device_type": 2 00:33:41.123 } 00:33:41.123 ], 00:33:41.123 "driver_specific": {} 00:33:41.123 } 00:33:41.123 ] 00:33:41.123 19:26:56 -- common/autotest_common.sh@893 -- # return 0 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.123 19:26:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.405 19:26:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:41.405 "name": "Existed_Raid", 00:33:41.405 "uuid": "9ace06ca-a921-41b0-96d9-227e659b2dfd", 00:33:41.405 "strip_size_kb": 0, 00:33:41.405 "state": "online", 00:33:41.405 "raid_level": "raid1", 00:33:41.405 "superblock": true, 00:33:41.405 "num_base_bdevs": 3, 00:33:41.405 "num_base_bdevs_discovered": 3, 00:33:41.405 "num_base_bdevs_operational": 3, 00:33:41.405 "base_bdevs_list": [ 00:33:41.405 { 00:33:41.405 "name": "BaseBdev1", 00:33:41.405 "uuid": "5808e44c-c3b0-4df2-8c6b-d9a39e151674", 00:33:41.405 "is_configured": true, 00:33:41.405 "data_offset": 2048, 00:33:41.405 "data_size": 63488 00:33:41.405 }, 00:33:41.405 { 00:33:41.405 "name": "BaseBdev2", 00:33:41.405 "uuid": "29641426-5b92-466d-b795-73a5dc23241d", 00:33:41.405 "is_configured": true, 00:33:41.405 "data_offset": 2048, 00:33:41.405 "data_size": 63488 00:33:41.405 }, 00:33:41.405 { 00:33:41.405 "name": "BaseBdev3", 00:33:41.405 "uuid": "c0ccfac2-e3d0-4cbc-b999-eed21879e8e7", 00:33:41.405 "is_configured": true, 00:33:41.405 "data_offset": 2048, 
00:33:41.405 "data_size": 63488 00:33:41.405 } 00:33:41.405 ] 00:33:41.405 }' 00:33:41.405 19:26:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:41.405 19:26:57 -- common/autotest_common.sh@10 -- # set +x 00:33:41.972 19:26:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:42.229 [2024-04-18 19:26:58.139865] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@196 -- # return 0 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.487 19:26:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:42.746 19:26:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:42.746 "name": "Existed_Raid", 00:33:42.746 "uuid": "9ace06ca-a921-41b0-96d9-227e659b2dfd", 00:33:42.746 "strip_size_kb": 0, 00:33:42.746 "state": "online", 00:33:42.746 "raid_level": "raid1", 00:33:42.746 "superblock": true, 00:33:42.746 "num_base_bdevs": 3, 00:33:42.746 "num_base_bdevs_discovered": 2, 00:33:42.746 "num_base_bdevs_operational": 2, 00:33:42.746 "base_bdevs_list": [ 00:33:42.746 { 00:33:42.746 "name": null, 00:33:42.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.746 "is_configured": false, 00:33:42.746 "data_offset": 2048, 00:33:42.746 "data_size": 63488 00:33:42.746 }, 00:33:42.746 { 00:33:42.746 "name": "BaseBdev2", 00:33:42.746 "uuid": "29641426-5b92-466d-b795-73a5dc23241d", 00:33:42.746 "is_configured": true, 00:33:42.746 "data_offset": 2048, 00:33:42.746 "data_size": 63488 00:33:42.746 }, 00:33:42.746 { 00:33:42.746 "name": "BaseBdev3", 00:33:42.746 "uuid": "c0ccfac2-e3d0-4cbc-b999-eed21879e8e7", 00:33:42.746 "is_configured": true, 00:33:42.746 "data_offset": 2048, 00:33:42.746 "data_size": 63488 00:33:42.746 } 00:33:42.746 ] 00:33:42.746 }' 00:33:42.746 19:26:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:42.746 19:26:58 -- common/autotest_common.sh@10 -- # set +x 00:33:43.682 19:26:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:33:43.682 19:26:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:43.682 19:26:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.682 19:26:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:33:43.682 19:26:59 -- 
bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:33:43.682 19:26:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:43.682 19:26:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:43.940 [2024-04-18 19:26:59.822234] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:44.198 19:26:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:33:44.198 19:26:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:44.198 19:26:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.198 19:26:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:33:44.456 19:27:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:33:44.456 19:27:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:44.456 19:27:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:44.725 [2024-04-18 19:27:00.491826] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:44.725 [2024-04-18 19:27:00.491950] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:44.725 [2024-04-18 19:27:00.602050] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:44.725 [2024-04-18 19:27:00.602181] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:44.725 [2024-04-18 19:27:00.602193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:33:44.725 19:27:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:33:44.725 19:27:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:33:44.725 19:27:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.725 19:27:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:33:44.983 19:27:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:33:44.983 19:27:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:33:44.983 19:27:00 -- bdev/bdev_raid.sh@287 -- # killprocess 126945 00:33:44.983 19:27:00 -- common/autotest_common.sh@936 -- # '[' -z 126945 ']' 00:33:44.983 19:27:00 -- common/autotest_common.sh@940 -- # kill -0 126945 00:33:44.983 19:27:00 -- common/autotest_common.sh@941 -- # uname 00:33:44.983 19:27:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:44.983 19:27:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126945 00:33:45.241 killing process with pid 126945 00:33:45.241 19:27:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:45.241 19:27:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:45.241 19:27:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126945' 00:33:45.241 19:27:00 -- common/autotest_common.sh@955 -- # kill 126945 00:33:45.241 19:27:00 -- common/autotest_common.sh@960 -- # wait 126945 00:33:45.241 [2024-04-18 19:27:00.918434] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:45.242 [2024-04-18 19:27:00.918590] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:46.616 ************************************ 00:33:46.616 END TEST raid_state_function_test_sb 00:33:46.616 ************************************ 00:33:46.616 19:27:02 -- 
bdev/bdev_raid.sh@289 -- # return 0 00:33:46.616 00:33:46.616 real 0m15.456s 00:33:46.616 user 0m27.017s 00:33:46.616 sys 0m1.978s 00:33:46.616 19:27:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:46.616 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:33:46.616 19:27:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:33:46.616 19:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:46.616 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:33:46.616 ************************************ 00:33:46.616 START TEST raid_superblock_test 00:33:46.616 ************************************ 00:33:46.616 19:27:02 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 3 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=127393 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127393 /var/tmp/spdk-raid.sock 00:33:46.616 19:27:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:46.616 19:27:02 -- common/autotest_common.sh@817 -- # '[' -z 127393 ']' 00:33:46.616 19:27:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:46.616 19:27:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:46.616 19:27:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:46.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:46.616 19:27:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:46.616 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:33:46.616 [2024-04-18 19:27:02.446360] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:33:46.616 [2024-04-18 19:27:02.446744] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127393 ] 00:33:46.874 [2024-04-18 19:27:02.616789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.133 [2024-04-18 19:27:02.849891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.391 [2024-04-18 19:27:03.062406] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:47.650 19:27:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:47.650 19:27:03 -- common/autotest_common.sh@850 -- # return 0 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:47.650 19:27:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:33:47.908 malloc1 00:33:47.908 19:27:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:48.166 [2024-04-18 19:27:03.907914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:48.166 [2024-04-18 19:27:03.908019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:48.166 [2024-04-18 19:27:03.908053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:48.166 [2024-04-18 19:27:03.908107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:48.166 [2024-04-18 19:27:03.910644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:48.166 [2024-04-18 19:27:03.910700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:48.166 pt1 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:48.166 19:27:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:33:48.425 malloc2 00:33:48.425 19:27:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:33:48.684 [2024-04-18 19:27:04.534394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:48.684 [2024-04-18 19:27:04.534480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:48.684 [2024-04-18 19:27:04.534524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:48.684 [2024-04-18 19:27:04.534580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:48.684 [2024-04-18 19:27:04.537165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:48.684 [2024-04-18 19:27:04.537221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:48.684 pt2 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:48.684 19:27:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:33:48.943 malloc3 00:33:48.943 19:27:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:49.510 [2024-04-18 19:27:05.135155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:49.510 [2024-04-18 19:27:05.135245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:49.510 [2024-04-18 19:27:05.135286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:49.510 [2024-04-18 19:27:05.135329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:49.510 [2024-04-18 19:27:05.137833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:49.510 [2024-04-18 19:27:05.137893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:49.510 pt3 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:33:49.510 [2024-04-18 19:27:05.419234] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:49.510 [2024-04-18 19:27:05.421459] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:49.510 [2024-04-18 19:27:05.421529] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:49.510 [2024-04-18 19:27:05.421720] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:33:49.510 [2024-04-18 19:27:05.421738] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:49.510 [2024-04-18 19:27:05.421887] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:33:49.510 [2024-04-18 19:27:05.422267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:33:49.510 [2024-04-18 19:27:05.422288] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:33:49.510 [2024-04-18 19:27:05.422440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:49.510 19:27:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:49.769 19:27:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.769 19:27:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.027 19:27:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:50.027 "name": "raid_bdev1", 00:33:50.027 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:33:50.027 "strip_size_kb": 0, 00:33:50.027 "state": "online", 00:33:50.027 "raid_level": "raid1", 00:33:50.027 "superblock": true, 00:33:50.027 "num_base_bdevs": 3, 00:33:50.027 "num_base_bdevs_discovered": 3, 00:33:50.027 "num_base_bdevs_operational": 3, 00:33:50.027 "base_bdevs_list": [ 00:33:50.027 { 00:33:50.027 "name": "pt1", 00:33:50.027 "uuid": "1c8195e0-fde5-583c-a734-b28a6ec07ccf", 00:33:50.027 "is_configured": true, 00:33:50.027 "data_offset": 2048, 00:33:50.027 "data_size": 63488 00:33:50.027 }, 00:33:50.027 { 00:33:50.027 "name": "pt2", 00:33:50.027 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:33:50.027 "is_configured": true, 00:33:50.027 "data_offset": 2048, 00:33:50.027 "data_size": 63488 00:33:50.027 }, 00:33:50.027 { 00:33:50.027 "name": "pt3", 00:33:50.027 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:33:50.027 "is_configured": true, 00:33:50.027 "data_offset": 2048, 00:33:50.027 "data_size": 63488 00:33:50.027 } 00:33:50.027 ] 00:33:50.027 }' 00:33:50.027 19:27:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:50.027 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:33:50.599 19:27:06 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:50.599 19:27:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:33:50.857 [2024-04-18 19:27:06.743747] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:50.857 19:27:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9d490928-795e-4e03-b52a-d0185bb74d87 00:33:50.857 19:27:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 9d490928-795e-4e03-b52a-d0185bb74d87 ']' 00:33:50.857 19:27:06 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:51.115 [2024-04-18 19:27:06.963576] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:51.115 [2024-04-18 19:27:06.963616] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:51.115 [2024-04-18 19:27:06.963697] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:51.115 [2024-04-18 19:27:06.963773] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:51.115 [2024-04-18 19:27:06.963783] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:33:51.115 19:27:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.115 19:27:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:33:51.373 19:27:07 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:33:51.373 19:27:07 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:33:51.373 19:27:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:33:51.373 19:27:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:51.631 19:27:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:33:51.631 19:27:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:51.890 19:27:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:33:51.890 19:27:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:52.148 19:27:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:52.148 19:27:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:52.406 19:27:08 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:33:52.406 19:27:08 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:52.406 19:27:08 -- common/autotest_common.sh@638 -- # local es=0 00:33:52.406 19:27:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:52.406 19:27:08 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:52.406 19:27:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:52.406 19:27:08 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:52.406 19:27:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:52.406 19:27:08 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:52.406 19:27:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:52.406 19:27:08 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:52.406 19:27:08 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:52.406 19:27:08 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:52.687 [2024-04-18 19:27:08.475864] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:52.687 [2024-04-18 19:27:08.478060] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:52.687 [2024-04-18 19:27:08.478132] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:52.687 [2024-04-18 19:27:08.478178] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:33:52.687 [2024-04-18 19:27:08.478247] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:33:52.687 [2024-04-18 19:27:08.478275] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:33:52.687 [2024-04-18 19:27:08.478332] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:52.687 [2024-04-18 19:27:08.478366] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:33:52.687 request: 00:33:52.687 { 00:33:52.687 "name": "raid_bdev1", 00:33:52.687 "raid_level": "raid1", 00:33:52.687 "base_bdevs": [ 00:33:52.687 "malloc1", 00:33:52.687 "malloc2", 00:33:52.687 "malloc3" 00:33:52.687 ], 00:33:52.687 "superblock": false, 00:33:52.687 "method": "bdev_raid_create", 00:33:52.687 "req_id": 1 00:33:52.687 } 00:33:52.687 Got JSON-RPC error response 00:33:52.687 response: 00:33:52.687 { 00:33:52.687 "code": -17, 00:33:52.687 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:52.687 } 00:33:52.687 19:27:08 -- common/autotest_common.sh@641 -- # es=1 00:33:52.687 19:27:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:52.687 19:27:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:52.687 19:27:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:52.687 19:27:08 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.687 19:27:08 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:33:52.959 19:27:08 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:33:52.959 19:27:08 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:33:52.959 19:27:08 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:53.217 [2024-04-18 19:27:08.895895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:53.217 [2024-04-18 19:27:08.895989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:53.217 [2024-04-18 19:27:08.896030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:53.217 [2024-04-18 19:27:08.896051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:53.217 [2024-04-18 19:27:08.898587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:53.217 [2024-04-18 19:27:08.898644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:53.217 [2024-04-18 19:27:08.898790] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:33:53.217 [2024-04-18 19:27:08.898847] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:53.217 pt1 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:33:53.217 
19:27:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.217 19:27:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.476 19:27:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:53.476 "name": "raid_bdev1", 00:33:53.476 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:33:53.476 "strip_size_kb": 0, 00:33:53.476 "state": "configuring", 00:33:53.476 "raid_level": "raid1", 00:33:53.476 "superblock": true, 00:33:53.476 "num_base_bdevs": 3, 00:33:53.476 "num_base_bdevs_discovered": 1, 00:33:53.476 "num_base_bdevs_operational": 3, 00:33:53.476 "base_bdevs_list": [ 00:33:53.476 { 00:33:53.476 "name": "pt1", 00:33:53.476 "uuid": "1c8195e0-fde5-583c-a734-b28a6ec07ccf", 00:33:53.476 "is_configured": true, 00:33:53.476 "data_offset": 2048, 00:33:53.476 "data_size": 63488 00:33:53.476 }, 00:33:53.476 { 00:33:53.476 "name": null, 00:33:53.476 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:33:53.476 "is_configured": false, 00:33:53.476 "data_offset": 2048, 00:33:53.476 "data_size": 63488 00:33:53.476 }, 00:33:53.476 { 00:33:53.476 "name": null, 00:33:53.476 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:33:53.476 "is_configured": false, 00:33:53.476 "data_offset": 2048, 00:33:53.476 "data_size": 63488 00:33:53.476 } 00:33:53.476 ] 00:33:53.476 }' 00:33:53.476 19:27:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:53.476 19:27:09 -- common/autotest_common.sh@10 -- # set +x 00:33:54.042 19:27:09 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:33:54.042 19:27:09 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:54.300 [2024-04-18 19:27:10.128199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:54.300 [2024-04-18 19:27:10.128313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:54.300 [2024-04-18 19:27:10.128362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:54.300 [2024-04-18 19:27:10.128384] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:54.300 [2024-04-18 19:27:10.128877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:54.300 [2024-04-18 19:27:10.128917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:54.300 [2024-04-18 19:27:10.129056] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:33:54.300 [2024-04-18 19:27:10.129082] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:54.300 pt2 00:33:54.300 19:27:10 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:54.558 [2024-04-18 19:27:10.404312] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.558 19:27:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.815 19:27:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:54.815 "name": "raid_bdev1", 00:33:54.815 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:33:54.816 "strip_size_kb": 0, 00:33:54.816 "state": "configuring", 00:33:54.816 "raid_level": "raid1", 00:33:54.816 "superblock": true, 00:33:54.816 "num_base_bdevs": 3, 00:33:54.816 "num_base_bdevs_discovered": 1, 00:33:54.816 "num_base_bdevs_operational": 3, 00:33:54.816 "base_bdevs_list": [ 00:33:54.816 { 00:33:54.816 "name": "pt1", 00:33:54.816 "uuid": "1c8195e0-fde5-583c-a734-b28a6ec07ccf", 00:33:54.816 "is_configured": true, 00:33:54.816 "data_offset": 2048, 00:33:54.816 "data_size": 63488 00:33:54.816 }, 00:33:54.816 { 00:33:54.816 "name": null, 00:33:54.816 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:33:54.816 "is_configured": false, 00:33:54.816 "data_offset": 2048, 00:33:54.816 "data_size": 63488 00:33:54.816 }, 00:33:54.816 { 00:33:54.816 "name": null, 00:33:54.816 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:33:54.816 "is_configured": false, 00:33:54.816 "data_offset": 2048, 00:33:54.816 "data_size": 63488 00:33:54.816 } 00:33:54.816 ] 00:33:54.816 }' 00:33:54.816 19:27:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:54.816 19:27:10 -- common/autotest_common.sh@10 -- # set +x 00:33:55.790 19:27:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:33:55.790 19:27:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:33:55.790 19:27:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:55.790 [2024-04-18 19:27:11.704608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:55.790 [2024-04-18 19:27:11.704720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:55.790 [2024-04-18 19:27:11.704778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:55.790 [2024-04-18 19:27:11.704807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:55.790 [2024-04-18 19:27:11.705296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:55.790 [2024-04-18 19:27:11.705346] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:55.790 [2024-04-18 19:27:11.705475] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:33:55.790 [2024-04-18 19:27:11.705501] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:55.790 pt2 00:33:56.048 19:27:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:33:56.048 19:27:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:33:56.048 19:27:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:56.307 [2024-04-18 19:27:11.996700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:56.307 [2024-04-18 19:27:11.996800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:56.307 [2024-04-18 19:27:11.996840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:33:56.307 [2024-04-18 19:27:11.996869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:56.307 [2024-04-18 19:27:11.997360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:56.307 [2024-04-18 19:27:11.997406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:56.307 [2024-04-18 19:27:11.997542] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:33:56.307 [2024-04-18 19:27:11.997568] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:56.307 [2024-04-18 19:27:11.997702] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:33:56.307 [2024-04-18 19:27:11.997714] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:56.307 [2024-04-18 19:27:11.997835] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:56.307 [2024-04-18 19:27:11.998169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:33:56.307 [2024-04-18 19:27:11.998190] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:33:56.307 [2024-04-18 19:27:11.998325] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:56.307 pt3 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:56.307 19:27:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.307 19:27:12 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.566 19:27:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:56.566 "name": "raid_bdev1", 00:33:56.566 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:33:56.566 "strip_size_kb": 0, 00:33:56.566 "state": "online", 00:33:56.566 "raid_level": "raid1", 00:33:56.566 "superblock": true, 00:33:56.566 "num_base_bdevs": 3, 00:33:56.566 "num_base_bdevs_discovered": 3, 00:33:56.566 "num_base_bdevs_operational": 3, 00:33:56.566 "base_bdevs_list": [ 00:33:56.566 { 00:33:56.566 "name": "pt1", 00:33:56.566 "uuid": "1c8195e0-fde5-583c-a734-b28a6ec07ccf", 00:33:56.566 "is_configured": true, 00:33:56.566 "data_offset": 2048, 00:33:56.566 "data_size": 63488 00:33:56.566 }, 00:33:56.566 { 00:33:56.566 "name": "pt2", 00:33:56.566 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:33:56.566 "is_configured": true, 00:33:56.566 "data_offset": 2048, 00:33:56.566 "data_size": 63488 00:33:56.566 }, 00:33:56.566 { 00:33:56.566 "name": "pt3", 00:33:56.566 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:33:56.566 "is_configured": true, 00:33:56.566 "data_offset": 2048, 00:33:56.566 "data_size": 63488 00:33:56.566 } 00:33:56.566 ] 00:33:56.566 }' 00:33:56.566 19:27:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:56.566 19:27:12 -- common/autotest_common.sh@10 -- # set +x 00:33:57.133 19:27:13 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:57.133 19:27:13 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:33:57.699 [2024-04-18 19:27:13.345293] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:57.699 19:27:13 -- bdev/bdev_raid.sh@430 -- # '[' 9d490928-795e-4e03-b52a-d0185bb74d87 '!=' 9d490928-795e-4e03-b52a-d0185bb74d87 ']' 00:33:57.699 19:27:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:33:57.699 19:27:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:33:57.699 19:27:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:33:57.699 19:27:13 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:57.699 [2024-04-18 19:27:13.617087] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.957 19:27:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.215 19:27:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:33:58.215 "name": "raid_bdev1", 00:33:58.215 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:33:58.215 "strip_size_kb": 0, 00:33:58.215 "state": "online", 
00:33:58.215 "raid_level": "raid1", 00:33:58.215 "superblock": true, 00:33:58.215 "num_base_bdevs": 3, 00:33:58.215 "num_base_bdevs_discovered": 2, 00:33:58.215 "num_base_bdevs_operational": 2, 00:33:58.215 "base_bdevs_list": [ 00:33:58.215 { 00:33:58.215 "name": null, 00:33:58.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:58.215 "is_configured": false, 00:33:58.215 "data_offset": 2048, 00:33:58.215 "data_size": 63488 00:33:58.215 }, 00:33:58.215 { 00:33:58.215 "name": "pt2", 00:33:58.215 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:33:58.215 "is_configured": true, 00:33:58.215 "data_offset": 2048, 00:33:58.215 "data_size": 63488 00:33:58.215 }, 00:33:58.215 { 00:33:58.215 "name": "pt3", 00:33:58.215 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:33:58.215 "is_configured": true, 00:33:58.215 "data_offset": 2048, 00:33:58.215 "data_size": 63488 00:33:58.215 } 00:33:58.215 ] 00:33:58.215 }' 00:33:58.215 19:27:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:33:58.215 19:27:13 -- common/autotest_common.sh@10 -- # set +x 00:33:58.782 19:27:14 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:59.348 [2024-04-18 19:27:15.065381] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:59.348 [2024-04-18 19:27:15.065432] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:59.348 [2024-04-18 19:27:15.065511] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:59.348 [2024-04-18 19:27:15.065583] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:59.348 [2024-04-18 19:27:15.065597] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:33:59.348 19:27:15 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.348 19:27:15 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:33:59.605 19:27:15 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:33:59.605 19:27:15 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:33:59.605 19:27:15 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:33:59.605 19:27:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:33:59.605 19:27:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:00.171 19:27:15 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:34:00.171 19:27:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:34:00.171 19:27:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:00.470 19:27:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:34:00.470 19:27:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:34:00.470 19:27:16 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:34:00.470 19:27:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:34:00.470 19:27:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:00.728 [2024-04-18 19:27:16.519797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:00.728 [2024-04-18 19:27:16.519940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:00.728 [2024-04-18 
19:27:16.520003] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:00.728 [2024-04-18 19:27:16.520048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:00.728 [2024-04-18 19:27:16.523847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:00.728 [2024-04-18 19:27:16.523938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:00.728 [2024-04-18 19:27:16.524135] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:00.728 [2024-04-18 19:27:16.524245] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:00.728 pt2 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.728 19:27:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.294 19:27:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:01.294 "name": "raid_bdev1", 00:34:01.294 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:34:01.294 "strip_size_kb": 0, 00:34:01.294 "state": "configuring", 00:34:01.294 "raid_level": "raid1", 00:34:01.294 "superblock": true, 00:34:01.294 "num_base_bdevs": 3, 00:34:01.294 "num_base_bdevs_discovered": 1, 00:34:01.294 "num_base_bdevs_operational": 2, 00:34:01.294 "base_bdevs_list": [ 00:34:01.294 { 00:34:01.294 "name": null, 00:34:01.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.294 "is_configured": false, 00:34:01.294 "data_offset": 2048, 00:34:01.294 "data_size": 63488 00:34:01.294 }, 00:34:01.294 { 00:34:01.294 "name": "pt2", 00:34:01.294 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:34:01.294 "is_configured": true, 00:34:01.294 "data_offset": 2048, 00:34:01.294 "data_size": 63488 00:34:01.294 }, 00:34:01.294 { 00:34:01.294 "name": null, 00:34:01.294 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:34:01.294 "is_configured": false, 00:34:01.294 "data_offset": 2048, 00:34:01.294 "data_size": 63488 00:34:01.294 } 00:34:01.294 ] 00:34:01.294 }' 00:34:01.294 19:27:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:01.294 19:27:16 -- common/autotest_common.sh@10 -- # set +x 00:34:01.859 19:27:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:34:01.859 19:27:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:34:01.859 19:27:17 -- bdev/bdev_raid.sh@462 -- # i=2 00:34:01.859 19:27:17 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:02.116 [2024-04-18 19:27:17.884553] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:02.116 [2024-04-18 19:27:17.884661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.116 [2024-04-18 19:27:17.884710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:02.117 [2024-04-18 19:27:17.884738] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.117 [2024-04-18 19:27:17.885232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.117 [2024-04-18 19:27:17.885272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:02.117 [2024-04-18 19:27:17.885404] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:02.117 [2024-04-18 19:27:17.885430] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:02.117 [2024-04-18 19:27:17.885546] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:34:02.117 [2024-04-18 19:27:17.885557] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:02.117 [2024-04-18 19:27:17.885670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:02.117 [2024-04-18 19:27:17.886029] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:34:02.117 [2024-04-18 19:27:17.886051] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:34:02.117 [2024-04-18 19:27:17.886181] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.117 pt3 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.117 19:27:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.374 19:27:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:02.374 "name": "raid_bdev1", 00:34:02.374 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:34:02.374 "strip_size_kb": 0, 00:34:02.374 "state": "online", 00:34:02.375 "raid_level": "raid1", 00:34:02.375 "superblock": true, 00:34:02.375 "num_base_bdevs": 3, 00:34:02.375 "num_base_bdevs_discovered": 2, 00:34:02.375 "num_base_bdevs_operational": 2, 00:34:02.375 "base_bdevs_list": [ 00:34:02.375 { 00:34:02.375 "name": null, 00:34:02.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.375 "is_configured": false, 00:34:02.375 "data_offset": 2048, 00:34:02.375 "data_size": 63488 00:34:02.375 }, 00:34:02.375 { 00:34:02.375 "name": "pt2", 00:34:02.375 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:34:02.375 
"is_configured": true, 00:34:02.375 "data_offset": 2048, 00:34:02.375 "data_size": 63488 00:34:02.375 }, 00:34:02.375 { 00:34:02.375 "name": "pt3", 00:34:02.375 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:34:02.375 "is_configured": true, 00:34:02.375 "data_offset": 2048, 00:34:02.375 "data_size": 63488 00:34:02.375 } 00:34:02.375 ] 00:34:02.375 }' 00:34:02.375 19:27:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:02.375 19:27:18 -- common/autotest_common.sh@10 -- # set +x 00:34:02.940 19:27:18 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:34:02.940 19:27:18 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:03.197 [2024-04-18 19:27:19.080839] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:03.197 [2024-04-18 19:27:19.080883] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:03.197 [2024-04-18 19:27:19.080956] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:03.197 [2024-04-18 19:27:19.081019] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:03.197 [2024-04-18 19:27:19.081030] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:34:03.197 19:27:19 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.197 19:27:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:34:03.763 19:27:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:34:03.763 19:27:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:34:03.763 19:27:19 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:04.020 [2024-04-18 19:27:19.707697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:04.020 [2024-04-18 19:27:19.707828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.020 [2024-04-18 19:27:19.707884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:04.020 [2024-04-18 19:27:19.707913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.020 [2024-04-18 19:27:19.711153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.020 [2024-04-18 19:27:19.711227] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:04.020 [2024-04-18 19:27:19.711410] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:04.020 [2024-04-18 19:27:19.711475] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:04.020 pt1 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.020 19:27:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.278 19:27:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:04.278 "name": "raid_bdev1", 00:34:04.278 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:34:04.278 "strip_size_kb": 0, 00:34:04.278 "state": "configuring", 00:34:04.278 "raid_level": "raid1", 00:34:04.278 "superblock": true, 00:34:04.278 "num_base_bdevs": 3, 00:34:04.278 "num_base_bdevs_discovered": 1, 00:34:04.278 "num_base_bdevs_operational": 3, 00:34:04.278 "base_bdevs_list": [ 00:34:04.278 { 00:34:04.278 "name": "pt1", 00:34:04.278 "uuid": "1c8195e0-fde5-583c-a734-b28a6ec07ccf", 00:34:04.278 "is_configured": true, 00:34:04.278 "data_offset": 2048, 00:34:04.278 "data_size": 63488 00:34:04.278 }, 00:34:04.278 { 00:34:04.278 "name": null, 00:34:04.278 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:34:04.278 "is_configured": false, 00:34:04.278 "data_offset": 2048, 00:34:04.278 "data_size": 63488 00:34:04.278 }, 00:34:04.278 { 00:34:04.278 "name": null, 00:34:04.278 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:34:04.278 "is_configured": false, 00:34:04.278 "data_offset": 2048, 00:34:04.278 "data_size": 63488 00:34:04.278 } 00:34:04.278 ] 00:34:04.278 }' 00:34:04.278 19:27:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:04.278 19:27:19 -- common/autotest_common.sh@10 -- # set +x 00:34:04.843 19:27:20 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:34:04.843 19:27:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:34:04.843 19:27:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:05.101 19:27:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:34:05.101 19:27:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:34:05.101 19:27:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:05.358 19:27:21 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:34:05.358 19:27:21 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:34:05.358 19:27:21 -- bdev/bdev_raid.sh@489 -- # i=2 00:34:05.358 19:27:21 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:05.620 [2024-04-18 19:27:21.352090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:05.620 [2024-04-18 19:27:21.352192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:05.620 [2024-04-18 19:27:21.352226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:34:05.620 [2024-04-18 19:27:21.352261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:05.620 [2024-04-18 19:27:21.352869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:05.620 [2024-04-18 19:27:21.352927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:05.620 [2024-04-18 19:27:21.353066] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:05.620 
[2024-04-18 19:27:21.353084] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:05.620 [2024-04-18 19:27:21.353093] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:05.620 [2024-04-18 19:27:21.353116] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:34:05.620 [2024-04-18 19:27:21.353237] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:05.620 pt3 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.620 19:27:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.878 19:27:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:05.878 "name": "raid_bdev1", 00:34:05.878 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:34:05.878 "strip_size_kb": 0, 00:34:05.878 "state": "configuring", 00:34:05.878 "raid_level": "raid1", 00:34:05.878 "superblock": true, 00:34:05.878 "num_base_bdevs": 3, 00:34:05.878 "num_base_bdevs_discovered": 1, 00:34:05.878 "num_base_bdevs_operational": 2, 00:34:05.878 "base_bdevs_list": [ 00:34:05.878 { 00:34:05.878 "name": null, 00:34:05.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.878 "is_configured": false, 00:34:05.878 "data_offset": 2048, 00:34:05.878 "data_size": 63488 00:34:05.878 }, 00:34:05.878 { 00:34:05.878 "name": null, 00:34:05.878 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:34:05.878 "is_configured": false, 00:34:05.878 "data_offset": 2048, 00:34:05.878 "data_size": 63488 00:34:05.878 }, 00:34:05.878 { 00:34:05.878 "name": "pt3", 00:34:05.878 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:34:05.878 "is_configured": true, 00:34:05.878 "data_offset": 2048, 00:34:05.878 "data_size": 63488 00:34:05.878 } 00:34:05.878 ] 00:34:05.878 }' 00:34:05.878 19:27:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:05.878 19:27:21 -- common/autotest_common.sh@10 -- # set +x 00:34:06.446 19:27:22 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:34:06.446 19:27:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:34:06.447 19:27:22 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:06.705 [2024-04-18 19:27:22.500338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:06.705 [2024-04-18 19:27:22.500447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:06.705 [2024-04-18 19:27:22.500497] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:34:06.705 [2024-04-18 19:27:22.500532] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:06.705 [2024-04-18 19:27:22.501044] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:06.705 [2024-04-18 19:27:22.501094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:06.705 [2024-04-18 19:27:22.501224] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:06.705 [2024-04-18 19:27:22.501249] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:06.705 [2024-04-18 19:27:22.501368] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:34:06.705 [2024-04-18 19:27:22.501379] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:06.705 [2024-04-18 19:27:22.501496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:06.705 [2024-04-18 19:27:22.501831] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:34:06.705 [2024-04-18 19:27:22.501853] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:34:06.705 [2024-04-18 19:27:22.502006] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:06.705 pt2 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.705 19:27:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.963 19:27:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:06.963 "name": "raid_bdev1", 00:34:06.963 "uuid": "9d490928-795e-4e03-b52a-d0185bb74d87", 00:34:06.963 "strip_size_kb": 0, 00:34:06.963 "state": "online", 00:34:06.963 "raid_level": "raid1", 00:34:06.963 "superblock": true, 00:34:06.963 "num_base_bdevs": 3, 00:34:06.963 "num_base_bdevs_discovered": 2, 00:34:06.963 "num_base_bdevs_operational": 2, 00:34:06.963 "base_bdevs_list": [ 00:34:06.963 { 00:34:06.963 "name": null, 00:34:06.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.963 "is_configured": false, 00:34:06.963 "data_offset": 2048, 00:34:06.963 "data_size": 63488 00:34:06.963 }, 00:34:06.963 { 00:34:06.963 "name": "pt2", 00:34:06.963 "uuid": "bfe9cc90-1919-56e2-b674-a3edd647f337", 00:34:06.963 "is_configured": true, 00:34:06.963 "data_offset": 2048, 00:34:06.963 "data_size": 63488 00:34:06.963 
}, 00:34:06.963 { 00:34:06.963 "name": "pt3", 00:34:06.963 "uuid": "a9bbf70a-8648-5ec3-897e-a1992301320a", 00:34:06.963 "is_configured": true, 00:34:06.963 "data_offset": 2048, 00:34:06.963 "data_size": 63488 00:34:06.963 } 00:34:06.963 ] 00:34:06.963 }' 00:34:06.963 19:27:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:06.963 19:27:22 -- common/autotest_common.sh@10 -- # set +x 00:34:07.531 19:27:23 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:07.531 19:27:23 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:34:07.800 [2024-04-18 19:27:23.668872] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:07.800 19:27:23 -- bdev/bdev_raid.sh@506 -- # '[' 9d490928-795e-4e03-b52a-d0185bb74d87 '!=' 9d490928-795e-4e03-b52a-d0185bb74d87 ']' 00:34:07.800 19:27:23 -- bdev/bdev_raid.sh@511 -- # killprocess 127393 00:34:07.800 19:27:23 -- common/autotest_common.sh@936 -- # '[' -z 127393 ']' 00:34:07.800 19:27:23 -- common/autotest_common.sh@940 -- # kill -0 127393 00:34:07.800 19:27:23 -- common/autotest_common.sh@941 -- # uname 00:34:07.800 19:27:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:07.800 19:27:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127393 00:34:07.800 killing process with pid 127393 00:34:07.800 19:27:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:07.800 19:27:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:07.800 19:27:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127393' 00:34:07.800 19:27:23 -- common/autotest_common.sh@955 -- # kill 127393 00:34:07.800 19:27:23 -- common/autotest_common.sh@960 -- # wait 127393 00:34:07.800 [2024-04-18 19:27:23.703849] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:07.800 [2024-04-18 19:27:23.703933] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:07.800 [2024-04-18 19:27:23.703995] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:07.800 [2024-04-18 19:27:23.704020] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:34:08.369 [2024-04-18 19:27:24.031251] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:09.744 ************************************ 00:34:09.744 END TEST raid_superblock_test 00:34:09.744 ************************************ 00:34:09.744 19:27:25 -- bdev/bdev_raid.sh@513 -- # return 0 00:34:09.744 00:34:09.744 real 0m23.155s 00:34:09.744 user 0m42.333s 00:34:09.744 sys 0m2.628s 00:34:09.744 19:27:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:09.744 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:34:09.744 19:27:25 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:34:09.744 19:27:25 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:34:09.744 19:27:25 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:34:09.744 19:27:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:09.744 19:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:09.744 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:34:09.744 ************************************ 00:34:09.745 START TEST raid_state_function_test 00:34:09.745 ************************************ 00:34:09.745 19:27:25 -- 
common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 false 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=128104 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128104' 00:34:09.745 Process raid pid: 128104 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128104 /var/tmp/spdk-raid.sock 00:34:09.745 19:27:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:09.745 19:27:25 -- common/autotest_common.sh@817 -- # '[' -z 128104 ']' 00:34:09.745 19:27:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:09.745 19:27:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:09.745 19:27:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:09.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:09.745 19:27:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:09.745 19:27:25 -- common/autotest_common.sh@10 -- # set +x 00:34:10.004 [2024-04-18 19:27:25.698003] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
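# Note: run_test has moved on to raid_state_function_test, invoked here as
# 'raid_state_function_test raid0 4 false' (raid level raid0, four base bdevs,
# superblock disabled). The trace above shows it expanding the base bdev list to
# 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4', keeping strip_size=64 because the level
# is not raid1, and launching a dedicated bdev_svc app (raid_pid 128104) on
# /var/tmp/spdk-raid.sock with '-L bdev_raid' debug logging; the SPDK/DPDK start-up
# messages continue below. Roughly, the strip-size handling traced above amounts to:
#   if [ "$raid_level" != raid1 ]; then strip_size=64; strip_size_create_arg='-z 64'; fi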
00:34:10.004 [2024-04-18 19:27:25.698336] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.004 [2024-04-18 19:27:25.862714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.263 [2024-04-18 19:27:26.074726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.528 [2024-04-18 19:27:26.299431] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:10.791 19:27:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:10.791 19:27:26 -- common/autotest_common.sh@850 -- # return 0 00:34:10.791 19:27:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:11.049 [2024-04-18 19:27:26.858338] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:11.049 [2024-04-18 19:27:26.858425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:11.049 [2024-04-18 19:27:26.858437] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:11.049 [2024-04-18 19:27:26.858476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:11.049 [2024-04-18 19:27:26.858484] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:11.049 [2024-04-18 19:27:26.858523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:11.049 [2024-04-18 19:27:26.858531] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:11.049 [2024-04-18 19:27:26.858556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.049 19:27:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:11.307 19:27:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:11.307 "name": "Existed_Raid", 00:34:11.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.307 "strip_size_kb": 64, 00:34:11.307 "state": "configuring", 00:34:11.307 "raid_level": "raid0", 00:34:11.307 "superblock": false, 00:34:11.307 "num_base_bdevs": 4, 00:34:11.307 "num_base_bdevs_discovered": 0, 00:34:11.307 "num_base_bdevs_operational": 4, 00:34:11.307 "base_bdevs_list": [ 00:34:11.307 { 00:34:11.307 
"name": "BaseBdev1", 00:34:11.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.307 "is_configured": false, 00:34:11.307 "data_offset": 0, 00:34:11.307 "data_size": 0 00:34:11.307 }, 00:34:11.307 { 00:34:11.307 "name": "BaseBdev2", 00:34:11.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.307 "is_configured": false, 00:34:11.307 "data_offset": 0, 00:34:11.307 "data_size": 0 00:34:11.307 }, 00:34:11.307 { 00:34:11.307 "name": "BaseBdev3", 00:34:11.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.307 "is_configured": false, 00:34:11.307 "data_offset": 0, 00:34:11.307 "data_size": 0 00:34:11.307 }, 00:34:11.307 { 00:34:11.307 "name": "BaseBdev4", 00:34:11.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.307 "is_configured": false, 00:34:11.307 "data_offset": 0, 00:34:11.307 "data_size": 0 00:34:11.307 } 00:34:11.307 ] 00:34:11.307 }' 00:34:11.307 19:27:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:11.307 19:27:27 -- common/autotest_common.sh@10 -- # set +x 00:34:12.240 19:27:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:12.240 [2024-04-18 19:27:28.064315] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:12.240 [2024-04-18 19:27:28.064539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:12.240 19:27:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:12.498 [2024-04-18 19:27:28.284392] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:12.498 [2024-04-18 19:27:28.284677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:12.498 [2024-04-18 19:27:28.284816] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:12.498 [2024-04-18 19:27:28.284877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:12.498 [2024-04-18 19:27:28.285010] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:12.498 [2024-04-18 19:27:28.285080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:12.498 [2024-04-18 19:27:28.285181] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:12.498 [2024-04-18 19:27:28.285234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:12.498 19:27:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:12.755 [2024-04-18 19:27:28.584055] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:12.755 BaseBdev1 00:34:12.755 19:27:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:12.755 19:27:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:34:12.755 19:27:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:12.755 19:27:28 -- common/autotest_common.sh@887 -- # local i 00:34:12.755 19:27:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:12.755 19:27:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:12.755 19:27:28 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:13.070 19:27:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:13.328 [ 00:34:13.328 { 00:34:13.328 "name": "BaseBdev1", 00:34:13.328 "aliases": [ 00:34:13.328 "30655085-54b1-42bc-812f-7020f4657fa7" 00:34:13.328 ], 00:34:13.328 "product_name": "Malloc disk", 00:34:13.328 "block_size": 512, 00:34:13.328 "num_blocks": 65536, 00:34:13.328 "uuid": "30655085-54b1-42bc-812f-7020f4657fa7", 00:34:13.328 "assigned_rate_limits": { 00:34:13.328 "rw_ios_per_sec": 0, 00:34:13.328 "rw_mbytes_per_sec": 0, 00:34:13.328 "r_mbytes_per_sec": 0, 00:34:13.328 "w_mbytes_per_sec": 0 00:34:13.328 }, 00:34:13.328 "claimed": true, 00:34:13.328 "claim_type": "exclusive_write", 00:34:13.328 "zoned": false, 00:34:13.328 "supported_io_types": { 00:34:13.328 "read": true, 00:34:13.328 "write": true, 00:34:13.328 "unmap": true, 00:34:13.328 "write_zeroes": true, 00:34:13.328 "flush": true, 00:34:13.328 "reset": true, 00:34:13.328 "compare": false, 00:34:13.328 "compare_and_write": false, 00:34:13.328 "abort": true, 00:34:13.328 "nvme_admin": false, 00:34:13.328 "nvme_io": false 00:34:13.328 }, 00:34:13.328 "memory_domains": [ 00:34:13.328 { 00:34:13.328 "dma_device_id": "system", 00:34:13.328 "dma_device_type": 1 00:34:13.328 }, 00:34:13.328 { 00:34:13.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.328 "dma_device_type": 2 00:34:13.328 } 00:34:13.328 ], 00:34:13.328 "driver_specific": {} 00:34:13.328 } 00:34:13.328 ] 00:34:13.328 19:27:29 -- common/autotest_common.sh@893 -- # return 0 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.328 19:27:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:13.586 19:27:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:13.586 "name": "Existed_Raid", 00:34:13.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.586 "strip_size_kb": 64, 00:34:13.586 "state": "configuring", 00:34:13.586 "raid_level": "raid0", 00:34:13.586 "superblock": false, 00:34:13.586 "num_base_bdevs": 4, 00:34:13.586 "num_base_bdevs_discovered": 1, 00:34:13.586 "num_base_bdevs_operational": 4, 00:34:13.586 "base_bdevs_list": [ 00:34:13.586 { 00:34:13.586 "name": "BaseBdev1", 00:34:13.586 "uuid": "30655085-54b1-42bc-812f-7020f4657fa7", 00:34:13.586 "is_configured": true, 00:34:13.586 "data_offset": 0, 00:34:13.586 "data_size": 65536 00:34:13.586 }, 00:34:13.586 { 00:34:13.586 "name": "BaseBdev2", 00:34:13.586 "uuid": "00000000-0000-0000-0000-000000000000", 
00:34:13.586 "is_configured": false, 00:34:13.586 "data_offset": 0, 00:34:13.586 "data_size": 0 00:34:13.586 }, 00:34:13.586 { 00:34:13.586 "name": "BaseBdev3", 00:34:13.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.586 "is_configured": false, 00:34:13.586 "data_offset": 0, 00:34:13.586 "data_size": 0 00:34:13.586 }, 00:34:13.586 { 00:34:13.586 "name": "BaseBdev4", 00:34:13.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.586 "is_configured": false, 00:34:13.586 "data_offset": 0, 00:34:13.586 "data_size": 0 00:34:13.586 } 00:34:13.586 ] 00:34:13.586 }' 00:34:13.586 19:27:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:13.586 19:27:29 -- common/autotest_common.sh@10 -- # set +x 00:34:14.153 19:27:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:14.411 [2024-04-18 19:27:30.204479] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:14.411 [2024-04-18 19:27:30.204749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:14.411 19:27:30 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:34:14.411 19:27:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:14.669 [2024-04-18 19:27:30.452581] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:14.669 [2024-04-18 19:27:30.454980] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:14.669 [2024-04-18 19:27:30.455227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:14.669 [2024-04-18 19:27:30.455350] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:14.669 [2024-04-18 19:27:30.455439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:14.669 [2024-04-18 19:27:30.455513] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:14.669 [2024-04-18 19:27:30.455638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:14.669 19:27:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:14.928 
19:27:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:14.928 "name": "Existed_Raid", 00:34:14.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.928 "strip_size_kb": 64, 00:34:14.928 "state": "configuring", 00:34:14.928 "raid_level": "raid0", 00:34:14.928 "superblock": false, 00:34:14.928 "num_base_bdevs": 4, 00:34:14.928 "num_base_bdevs_discovered": 1, 00:34:14.928 "num_base_bdevs_operational": 4, 00:34:14.928 "base_bdevs_list": [ 00:34:14.928 { 00:34:14.928 "name": "BaseBdev1", 00:34:14.928 "uuid": "30655085-54b1-42bc-812f-7020f4657fa7", 00:34:14.928 "is_configured": true, 00:34:14.928 "data_offset": 0, 00:34:14.928 "data_size": 65536 00:34:14.928 }, 00:34:14.928 { 00:34:14.928 "name": "BaseBdev2", 00:34:14.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.928 "is_configured": false, 00:34:14.928 "data_offset": 0, 00:34:14.928 "data_size": 0 00:34:14.928 }, 00:34:14.928 { 00:34:14.928 "name": "BaseBdev3", 00:34:14.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.928 "is_configured": false, 00:34:14.928 "data_offset": 0, 00:34:14.928 "data_size": 0 00:34:14.928 }, 00:34:14.928 { 00:34:14.928 "name": "BaseBdev4", 00:34:14.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.928 "is_configured": false, 00:34:14.928 "data_offset": 0, 00:34:14.928 "data_size": 0 00:34:14.928 } 00:34:14.928 ] 00:34:14.928 }' 00:34:14.928 19:27:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:14.928 19:27:30 -- common/autotest_common.sh@10 -- # set +x 00:34:15.494 19:27:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:15.751 [2024-04-18 19:27:31.628475] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:15.751 BaseBdev2 00:34:15.751 19:27:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:34:15.751 19:27:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:34:15.751 19:27:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:15.751 19:27:31 -- common/autotest_common.sh@887 -- # local i 00:34:15.751 19:27:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:15.751 19:27:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:15.751 19:27:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:16.029 19:27:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:16.287 [ 00:34:16.287 { 00:34:16.287 "name": "BaseBdev2", 00:34:16.287 "aliases": [ 00:34:16.287 "45a29049-f839-4bbb-a02b-f804a9b4dca4" 00:34:16.287 ], 00:34:16.287 "product_name": "Malloc disk", 00:34:16.287 "block_size": 512, 00:34:16.287 "num_blocks": 65536, 00:34:16.287 "uuid": "45a29049-f839-4bbb-a02b-f804a9b4dca4", 00:34:16.287 "assigned_rate_limits": { 00:34:16.287 "rw_ios_per_sec": 0, 00:34:16.287 "rw_mbytes_per_sec": 0, 00:34:16.287 "r_mbytes_per_sec": 0, 00:34:16.287 "w_mbytes_per_sec": 0 00:34:16.287 }, 00:34:16.287 "claimed": true, 00:34:16.287 "claim_type": "exclusive_write", 00:34:16.287 "zoned": false, 00:34:16.287 "supported_io_types": { 00:34:16.287 "read": true, 00:34:16.287 "write": true, 00:34:16.287 "unmap": true, 00:34:16.287 "write_zeroes": true, 00:34:16.287 "flush": true, 00:34:16.287 "reset": true, 00:34:16.287 "compare": false, 00:34:16.287 "compare_and_write": false, 00:34:16.287 "abort": true, 00:34:16.287 
"nvme_admin": false, 00:34:16.287 "nvme_io": false 00:34:16.287 }, 00:34:16.287 "memory_domains": [ 00:34:16.287 { 00:34:16.287 "dma_device_id": "system", 00:34:16.287 "dma_device_type": 1 00:34:16.287 }, 00:34:16.287 { 00:34:16.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.287 "dma_device_type": 2 00:34:16.287 } 00:34:16.287 ], 00:34:16.287 "driver_specific": {} 00:34:16.287 } 00:34:16.287 ] 00:34:16.287 19:27:32 -- common/autotest_common.sh@893 -- # return 0 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:16.287 19:27:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:16.545 19:27:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:16.545 "name": "Existed_Raid", 00:34:16.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.545 "strip_size_kb": 64, 00:34:16.545 "state": "configuring", 00:34:16.545 "raid_level": "raid0", 00:34:16.545 "superblock": false, 00:34:16.545 "num_base_bdevs": 4, 00:34:16.545 "num_base_bdevs_discovered": 2, 00:34:16.545 "num_base_bdevs_operational": 4, 00:34:16.545 "base_bdevs_list": [ 00:34:16.545 { 00:34:16.545 "name": "BaseBdev1", 00:34:16.545 "uuid": "30655085-54b1-42bc-812f-7020f4657fa7", 00:34:16.545 "is_configured": true, 00:34:16.545 "data_offset": 0, 00:34:16.545 "data_size": 65536 00:34:16.545 }, 00:34:16.545 { 00:34:16.545 "name": "BaseBdev2", 00:34:16.545 "uuid": "45a29049-f839-4bbb-a02b-f804a9b4dca4", 00:34:16.545 "is_configured": true, 00:34:16.545 "data_offset": 0, 00:34:16.545 "data_size": 65536 00:34:16.545 }, 00:34:16.545 { 00:34:16.545 "name": "BaseBdev3", 00:34:16.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.545 "is_configured": false, 00:34:16.545 "data_offset": 0, 00:34:16.545 "data_size": 0 00:34:16.545 }, 00:34:16.545 { 00:34:16.545 "name": "BaseBdev4", 00:34:16.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.545 "is_configured": false, 00:34:16.545 "data_offset": 0, 00:34:16.545 "data_size": 0 00:34:16.545 } 00:34:16.545 ] 00:34:16.545 }' 00:34:16.545 19:27:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:16.545 19:27:32 -- common/autotest_common.sh@10 -- # set +x 00:34:17.109 19:27:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:17.675 [2024-04-18 19:27:33.310708] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:17.675 BaseBdev3 00:34:17.675 19:27:33 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev3 00:34:17.675 19:27:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:34:17.675 19:27:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:17.675 19:27:33 -- common/autotest_common.sh@887 -- # local i 00:34:17.675 19:27:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:17.675 19:27:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:17.675 19:27:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:17.675 19:27:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:17.933 [ 00:34:17.933 { 00:34:17.933 "name": "BaseBdev3", 00:34:17.933 "aliases": [ 00:34:17.933 "0ebf5150-680a-41bc-89d0-7b424d9b685b" 00:34:17.933 ], 00:34:17.933 "product_name": "Malloc disk", 00:34:17.933 "block_size": 512, 00:34:17.933 "num_blocks": 65536, 00:34:17.933 "uuid": "0ebf5150-680a-41bc-89d0-7b424d9b685b", 00:34:17.933 "assigned_rate_limits": { 00:34:17.933 "rw_ios_per_sec": 0, 00:34:17.933 "rw_mbytes_per_sec": 0, 00:34:17.933 "r_mbytes_per_sec": 0, 00:34:17.933 "w_mbytes_per_sec": 0 00:34:17.933 }, 00:34:17.933 "claimed": true, 00:34:17.933 "claim_type": "exclusive_write", 00:34:17.933 "zoned": false, 00:34:17.933 "supported_io_types": { 00:34:17.933 "read": true, 00:34:17.933 "write": true, 00:34:17.933 "unmap": true, 00:34:17.933 "write_zeroes": true, 00:34:17.933 "flush": true, 00:34:17.933 "reset": true, 00:34:17.933 "compare": false, 00:34:17.933 "compare_and_write": false, 00:34:17.933 "abort": true, 00:34:17.933 "nvme_admin": false, 00:34:17.933 "nvme_io": false 00:34:17.933 }, 00:34:17.933 "memory_domains": [ 00:34:17.933 { 00:34:17.933 "dma_device_id": "system", 00:34:17.933 "dma_device_type": 1 00:34:17.933 }, 00:34:17.933 { 00:34:17.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.933 "dma_device_type": 2 00:34:17.933 } 00:34:17.933 ], 00:34:17.933 "driver_specific": {} 00:34:17.933 } 00:34:17.933 ] 00:34:17.933 19:27:33 -- common/autotest_common.sh@893 -- # return 0 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.933 19:27:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.191 19:27:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:18.191 "name": "Existed_Raid", 00:34:18.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.191 "strip_size_kb": 64, 
00:34:18.191 "state": "configuring", 00:34:18.191 "raid_level": "raid0", 00:34:18.191 "superblock": false, 00:34:18.191 "num_base_bdevs": 4, 00:34:18.191 "num_base_bdevs_discovered": 3, 00:34:18.191 "num_base_bdevs_operational": 4, 00:34:18.191 "base_bdevs_list": [ 00:34:18.191 { 00:34:18.191 "name": "BaseBdev1", 00:34:18.191 "uuid": "30655085-54b1-42bc-812f-7020f4657fa7", 00:34:18.191 "is_configured": true, 00:34:18.191 "data_offset": 0, 00:34:18.191 "data_size": 65536 00:34:18.191 }, 00:34:18.191 { 00:34:18.191 "name": "BaseBdev2", 00:34:18.191 "uuid": "45a29049-f839-4bbb-a02b-f804a9b4dca4", 00:34:18.191 "is_configured": true, 00:34:18.191 "data_offset": 0, 00:34:18.191 "data_size": 65536 00:34:18.191 }, 00:34:18.191 { 00:34:18.191 "name": "BaseBdev3", 00:34:18.191 "uuid": "0ebf5150-680a-41bc-89d0-7b424d9b685b", 00:34:18.191 "is_configured": true, 00:34:18.191 "data_offset": 0, 00:34:18.191 "data_size": 65536 00:34:18.191 }, 00:34:18.191 { 00:34:18.191 "name": "BaseBdev4", 00:34:18.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.191 "is_configured": false, 00:34:18.191 "data_offset": 0, 00:34:18.191 "data_size": 0 00:34:18.191 } 00:34:18.191 ] 00:34:18.191 }' 00:34:18.191 19:27:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:18.191 19:27:34 -- common/autotest_common.sh@10 -- # set +x 00:34:19.133 19:27:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:19.392 [2024-04-18 19:27:35.130513] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:19.392 [2024-04-18 19:27:35.130565] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:34:19.392 [2024-04-18 19:27:35.130575] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:34:19.392 [2024-04-18 19:27:35.130733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:34:19.392 [2024-04-18 19:27:35.131083] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:34:19.392 [2024-04-18 19:27:35.131109] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:34:19.392 [2024-04-18 19:27:35.131385] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.392 BaseBdev4 00:34:19.392 19:27:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:34:19.392 19:27:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:34:19.392 19:27:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:19.392 19:27:35 -- common/autotest_common.sh@887 -- # local i 00:34:19.392 19:27:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:19.392 19:27:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:19.392 19:27:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:19.650 19:27:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:19.909 [ 00:34:19.909 { 00:34:19.909 "name": "BaseBdev4", 00:34:19.909 "aliases": [ 00:34:19.909 "f09eb2a1-65a1-41d5-9b8b-764a547a7725" 00:34:19.909 ], 00:34:19.909 "product_name": "Malloc disk", 00:34:19.909 "block_size": 512, 00:34:19.909 "num_blocks": 65536, 00:34:19.909 "uuid": "f09eb2a1-65a1-41d5-9b8b-764a547a7725", 00:34:19.909 
"assigned_rate_limits": { 00:34:19.909 "rw_ios_per_sec": 0, 00:34:19.909 "rw_mbytes_per_sec": 0, 00:34:19.909 "r_mbytes_per_sec": 0, 00:34:19.909 "w_mbytes_per_sec": 0 00:34:19.909 }, 00:34:19.909 "claimed": true, 00:34:19.909 "claim_type": "exclusive_write", 00:34:19.909 "zoned": false, 00:34:19.909 "supported_io_types": { 00:34:19.909 "read": true, 00:34:19.909 "write": true, 00:34:19.909 "unmap": true, 00:34:19.909 "write_zeroes": true, 00:34:19.909 "flush": true, 00:34:19.909 "reset": true, 00:34:19.909 "compare": false, 00:34:19.909 "compare_and_write": false, 00:34:19.909 "abort": true, 00:34:19.909 "nvme_admin": false, 00:34:19.909 "nvme_io": false 00:34:19.909 }, 00:34:19.909 "memory_domains": [ 00:34:19.909 { 00:34:19.909 "dma_device_id": "system", 00:34:19.909 "dma_device_type": 1 00:34:19.909 }, 00:34:19.909 { 00:34:19.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.909 "dma_device_type": 2 00:34:19.909 } 00:34:19.909 ], 00:34:19.909 "driver_specific": {} 00:34:19.909 } 00:34:19.909 ] 00:34:19.909 19:27:35 -- common/autotest_common.sh@893 -- # return 0 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:19.909 "name": "Existed_Raid", 00:34:19.909 "uuid": "3c8cb9be-d74e-4abe-a1e8-957a1ac604a8", 00:34:19.909 "strip_size_kb": 64, 00:34:19.909 "state": "online", 00:34:19.909 "raid_level": "raid0", 00:34:19.909 "superblock": false, 00:34:19.909 "num_base_bdevs": 4, 00:34:19.909 "num_base_bdevs_discovered": 4, 00:34:19.909 "num_base_bdevs_operational": 4, 00:34:19.909 "base_bdevs_list": [ 00:34:19.909 { 00:34:19.909 "name": "BaseBdev1", 00:34:19.909 "uuid": "30655085-54b1-42bc-812f-7020f4657fa7", 00:34:19.909 "is_configured": true, 00:34:19.909 "data_offset": 0, 00:34:19.909 "data_size": 65536 00:34:19.909 }, 00:34:19.909 { 00:34:19.909 "name": "BaseBdev2", 00:34:19.909 "uuid": "45a29049-f839-4bbb-a02b-f804a9b4dca4", 00:34:19.909 "is_configured": true, 00:34:19.909 "data_offset": 0, 00:34:19.909 "data_size": 65536 00:34:19.909 }, 00:34:19.909 { 00:34:19.909 "name": "BaseBdev3", 00:34:19.909 "uuid": "0ebf5150-680a-41bc-89d0-7b424d9b685b", 00:34:19.909 "is_configured": true, 00:34:19.909 "data_offset": 0, 00:34:19.909 "data_size": 65536 00:34:19.909 }, 00:34:19.909 { 00:34:19.909 "name": "BaseBdev4", 00:34:19.909 "uuid": "f09eb2a1-65a1-41d5-9b8b-764a547a7725", 00:34:19.909 "is_configured": true, 
00:34:19.909 "data_offset": 0, 00:34:19.909 "data_size": 65536 00:34:19.909 } 00:34:19.909 ] 00:34:19.909 }' 00:34:19.909 19:27:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:19.909 19:27:35 -- common/autotest_common.sh@10 -- # set +x 00:34:20.844 19:27:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:20.844 [2024-04-18 19:27:36.723036] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:20.844 [2024-04-18 19:27:36.723088] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:20.844 [2024-04-18 19:27:36.723149] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.109 19:27:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.400 19:27:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:21.400 "name": "Existed_Raid", 00:34:21.400 "uuid": "3c8cb9be-d74e-4abe-a1e8-957a1ac604a8", 00:34:21.400 "strip_size_kb": 64, 00:34:21.400 "state": "offline", 00:34:21.400 "raid_level": "raid0", 00:34:21.400 "superblock": false, 00:34:21.400 "num_base_bdevs": 4, 00:34:21.400 "num_base_bdevs_discovered": 3, 00:34:21.400 "num_base_bdevs_operational": 3, 00:34:21.400 "base_bdevs_list": [ 00:34:21.400 { 00:34:21.400 "name": null, 00:34:21.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.400 "is_configured": false, 00:34:21.400 "data_offset": 0, 00:34:21.400 "data_size": 65536 00:34:21.400 }, 00:34:21.400 { 00:34:21.400 "name": "BaseBdev2", 00:34:21.400 "uuid": "45a29049-f839-4bbb-a02b-f804a9b4dca4", 00:34:21.400 "is_configured": true, 00:34:21.400 "data_offset": 0, 00:34:21.400 "data_size": 65536 00:34:21.400 }, 00:34:21.400 { 00:34:21.400 "name": "BaseBdev3", 00:34:21.400 "uuid": "0ebf5150-680a-41bc-89d0-7b424d9b685b", 00:34:21.400 "is_configured": true, 00:34:21.400 "data_offset": 0, 00:34:21.400 "data_size": 65536 00:34:21.400 }, 00:34:21.400 { 00:34:21.400 "name": "BaseBdev4", 00:34:21.400 "uuid": "f09eb2a1-65a1-41d5-9b8b-764a547a7725", 00:34:21.400 "is_configured": true, 00:34:21.400 "data_offset": 0, 00:34:21.400 "data_size": 65536 00:34:21.400 } 00:34:21.400 ] 00:34:21.400 }' 00:34:21.400 19:27:37 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:21.400 19:27:37 -- common/autotest_common.sh@10 -- # set +x 00:34:21.967 19:27:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:21.967 19:27:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:21.967 19:27:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.967 19:27:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:22.225 19:27:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:22.225 19:27:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:22.225 19:27:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:22.791 [2024-04-18 19:27:38.465401] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:22.791 19:27:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:22.791 19:27:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:22.791 19:27:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.791 19:27:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:23.049 19:27:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:23.049 19:27:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:23.049 19:27:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:23.307 [2024-04-18 19:27:39.159404] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:23.566 19:27:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:23.566 19:27:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:23.566 19:27:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:23.566 19:27:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.824 19:27:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:23.824 19:27:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:23.824 19:27:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:34:24.082 [2024-04-18 19:27:39.819144] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:34:24.082 [2024-04-18 19:27:39.819214] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:34:24.082 19:27:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:24.082 19:27:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:24.082 19:27:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.082 19:27:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:34:24.649 19:27:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:34:24.649 19:27:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:24.649 19:27:40 -- bdev/bdev_raid.sh@287 -- # killprocess 128104 00:34:24.649 19:27:40 -- common/autotest_common.sh@936 -- # '[' -z 128104 ']' 00:34:24.649 19:27:40 -- common/autotest_common.sh@940 -- # kill -0 128104 00:34:24.649 19:27:40 -- common/autotest_common.sh@941 -- # uname 00:34:24.649 19:27:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:24.649 19:27:40 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 128104 00:34:24.649 killing process with pid 128104 00:34:24.649 19:27:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:24.649 19:27:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:24.649 19:27:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128104' 00:34:24.649 19:27:40 -- common/autotest_common.sh@955 -- # kill 128104 00:34:24.649 19:27:40 -- common/autotest_common.sh@960 -- # wait 128104 00:34:24.649 [2024-04-18 19:27:40.317866] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:24.649 [2024-04-18 19:27:40.318044] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:26.025 ************************************ 00:34:26.025 END TEST raid_state_function_test 00:34:26.025 ************************************ 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:26.025 00:34:26.025 real 0m16.202s 00:34:26.025 user 0m28.442s 00:34:26.025 sys 0m1.961s 00:34:26.025 19:27:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:26.025 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:34:26.025 19:27:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:26.025 19:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:26.025 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:34:26.025 ************************************ 00:34:26.025 START TEST raid_state_function_test_sb 00:34:26.025 ************************************ 00:34:26.025 19:27:41 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 true 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:26.025 19:27:41 -- 
bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=128581 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128581' 00:34:26.025 Process raid pid: 128581 00:34:26.025 19:27:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128581 /var/tmp/spdk-raid.sock 00:34:26.026 19:27:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:26.026 19:27:41 -- common/autotest_common.sh@817 -- # '[' -z 128581 ']' 00:34:26.026 19:27:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:26.026 19:27:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:26.026 19:27:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:26.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:26.026 19:27:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:26.026 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:34:26.285 [2024-04-18 19:27:42.017381] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:34:26.285 [2024-04-18 19:27:42.017564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.285 [2024-04-18 19:27:42.200346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.544 [2024-04-18 19:27:42.425813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.803 [2024-04-18 19:27:42.659337] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:27.387 19:27:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:27.387 19:27:43 -- common/autotest_common.sh@850 -- # return 0 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:27.387 [2024-04-18 19:27:43.216232] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:27.387 [2024-04-18 19:27:43.216312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:27.387 [2024-04-18 19:27:43.216324] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:27.387 [2024-04-18 19:27:43.216347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:27.387 [2024-04-18 19:27:43.216355] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:27.387 [2024-04-18 19:27:43.216401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:27.387 [2024-04-18 19:27:43.216410] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:27.387 [2024-04-18 19:27:43.216436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev4 doesn't exist now 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.387 19:27:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.651 19:27:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:27.651 "name": "Existed_Raid", 00:34:27.651 "uuid": "656381b1-688c-4de0-99f0-27bea84a3e1f", 00:34:27.651 "strip_size_kb": 64, 00:34:27.651 "state": "configuring", 00:34:27.651 "raid_level": "raid0", 00:34:27.651 "superblock": true, 00:34:27.651 "num_base_bdevs": 4, 00:34:27.651 "num_base_bdevs_discovered": 0, 00:34:27.651 "num_base_bdevs_operational": 4, 00:34:27.651 "base_bdevs_list": [ 00:34:27.651 { 00:34:27.651 "name": "BaseBdev1", 00:34:27.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.651 "is_configured": false, 00:34:27.651 "data_offset": 0, 00:34:27.651 "data_size": 0 00:34:27.651 }, 00:34:27.651 { 00:34:27.651 "name": "BaseBdev2", 00:34:27.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.651 "is_configured": false, 00:34:27.651 "data_offset": 0, 00:34:27.651 "data_size": 0 00:34:27.651 }, 00:34:27.651 { 00:34:27.651 "name": "BaseBdev3", 00:34:27.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.651 "is_configured": false, 00:34:27.651 "data_offset": 0, 00:34:27.651 "data_size": 0 00:34:27.651 }, 00:34:27.651 { 00:34:27.651 "name": "BaseBdev4", 00:34:27.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.651 "is_configured": false, 00:34:27.651 "data_offset": 0, 00:34:27.651 "data_size": 0 00:34:27.651 } 00:34:27.651 ] 00:34:27.651 }' 00:34:27.651 19:27:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:27.651 19:27:43 -- common/autotest_common.sh@10 -- # set +x 00:34:28.218 19:27:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:28.475 [2024-04-18 19:27:44.284342] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:28.475 [2024-04-18 19:27:44.284587] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:28.476 19:27:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:28.733 [2024-04-18 19:27:44.560579] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:28.733 [2024-04-18 19:27:44.561443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:28.733 [2024-04-18 19:27:44.561588] 
bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:28.733 [2024-04-18 19:27:44.561783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:28.733 [2024-04-18 19:27:44.561982] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:28.733 [2024-04-18 19:27:44.562167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:28.733 [2024-04-18 19:27:44.562273] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:28.733 [2024-04-18 19:27:44.562425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:28.733 19:27:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:28.991 [2024-04-18 19:27:44.818511] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:28.991 BaseBdev1 00:34:28.991 19:27:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:28.991 19:27:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:34:28.991 19:27:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:28.991 19:27:44 -- common/autotest_common.sh@887 -- # local i 00:34:28.991 19:27:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:28.991 19:27:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:28.991 19:27:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:29.249 19:27:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:29.507 [ 00:34:29.507 { 00:34:29.507 "name": "BaseBdev1", 00:34:29.507 "aliases": [ 00:34:29.507 "a441906f-1eeb-49d1-bada-d9af1da69fac" 00:34:29.507 ], 00:34:29.507 "product_name": "Malloc disk", 00:34:29.507 "block_size": 512, 00:34:29.507 "num_blocks": 65536, 00:34:29.507 "uuid": "a441906f-1eeb-49d1-bada-d9af1da69fac", 00:34:29.507 "assigned_rate_limits": { 00:34:29.507 "rw_ios_per_sec": 0, 00:34:29.507 "rw_mbytes_per_sec": 0, 00:34:29.507 "r_mbytes_per_sec": 0, 00:34:29.507 "w_mbytes_per_sec": 0 00:34:29.507 }, 00:34:29.507 "claimed": true, 00:34:29.507 "claim_type": "exclusive_write", 00:34:29.507 "zoned": false, 00:34:29.507 "supported_io_types": { 00:34:29.507 "read": true, 00:34:29.507 "write": true, 00:34:29.507 "unmap": true, 00:34:29.507 "write_zeroes": true, 00:34:29.507 "flush": true, 00:34:29.507 "reset": true, 00:34:29.507 "compare": false, 00:34:29.507 "compare_and_write": false, 00:34:29.507 "abort": true, 00:34:29.507 "nvme_admin": false, 00:34:29.507 "nvme_io": false 00:34:29.507 }, 00:34:29.507 "memory_domains": [ 00:34:29.507 { 00:34:29.507 "dma_device_id": "system", 00:34:29.507 "dma_device_type": 1 00:34:29.507 }, 00:34:29.507 { 00:34:29.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.507 "dma_device_type": 2 00:34:29.507 } 00:34:29.507 ], 00:34:29.507 "driver_specific": {} 00:34:29.507 } 00:34:29.507 ] 00:34:29.507 19:27:45 -- common/autotest_common.sh@893 -- # return 0 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
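For reference, the base-bdev setup step traced in the log above reduces to the following shell sketch. It reuses the RPC socket, script path, bdev name and sizes that appear in the log; the sequencing is simplified and is not the literal body of the waitforbdev helper.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # create a 32 MB malloc bdev with 512-byte blocks (65536 blocks, as reported above)
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  # let pending examine callbacks finish, then confirm the bdev is visible (2000 ms timeout)
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b BaseBdev1 -t 2000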
00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.507 19:27:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:29.764 19:27:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:29.764 "name": "Existed_Raid", 00:34:29.764 "uuid": "5767a586-ed02-4ff3-a645-fb750ff1cd04", 00:34:29.764 "strip_size_kb": 64, 00:34:29.764 "state": "configuring", 00:34:29.764 "raid_level": "raid0", 00:34:29.764 "superblock": true, 00:34:29.764 "num_base_bdevs": 4, 00:34:29.764 "num_base_bdevs_discovered": 1, 00:34:29.764 "num_base_bdevs_operational": 4, 00:34:29.764 "base_bdevs_list": [ 00:34:29.764 { 00:34:29.764 "name": "BaseBdev1", 00:34:29.764 "uuid": "a441906f-1eeb-49d1-bada-d9af1da69fac", 00:34:29.764 "is_configured": true, 00:34:29.764 "data_offset": 2048, 00:34:29.764 "data_size": 63488 00:34:29.764 }, 00:34:29.764 { 00:34:29.764 "name": "BaseBdev2", 00:34:29.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.764 "is_configured": false, 00:34:29.764 "data_offset": 0, 00:34:29.765 "data_size": 0 00:34:29.765 }, 00:34:29.765 { 00:34:29.765 "name": "BaseBdev3", 00:34:29.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.765 "is_configured": false, 00:34:29.765 "data_offset": 0, 00:34:29.765 "data_size": 0 00:34:29.765 }, 00:34:29.765 { 00:34:29.765 "name": "BaseBdev4", 00:34:29.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.765 "is_configured": false, 00:34:29.765 "data_offset": 0, 00:34:29.765 "data_size": 0 00:34:29.765 } 00:34:29.765 ] 00:34:29.765 }' 00:34:29.765 19:27:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:29.765 19:27:45 -- common/autotest_common.sh@10 -- # set +x 00:34:30.329 19:27:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:30.586 [2024-04-18 19:27:46.298914] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:30.586 [2024-04-18 19:27:46.298977] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:30.586 19:27:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:34:30.587 19:27:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:30.844 19:27:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:31.102 BaseBdev1 00:34:31.102 19:27:46 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:34:31.102 19:27:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:34:31.102 19:27:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:31.102 19:27:46 -- common/autotest_common.sh@887 -- # local i 00:34:31.102 19:27:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:31.102 19:27:46 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:31.102 19:27:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:31.359 19:27:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:31.616 [ 00:34:31.616 { 00:34:31.616 "name": "BaseBdev1", 00:34:31.616 "aliases": [ 00:34:31.616 "3f2a3d6f-b8e1-4506-9c50-14de9eb9b6cf" 00:34:31.616 ], 00:34:31.616 "product_name": "Malloc disk", 00:34:31.616 "block_size": 512, 00:34:31.616 "num_blocks": 65536, 00:34:31.616 "uuid": "3f2a3d6f-b8e1-4506-9c50-14de9eb9b6cf", 00:34:31.616 "assigned_rate_limits": { 00:34:31.616 "rw_ios_per_sec": 0, 00:34:31.616 "rw_mbytes_per_sec": 0, 00:34:31.616 "r_mbytes_per_sec": 0, 00:34:31.616 "w_mbytes_per_sec": 0 00:34:31.616 }, 00:34:31.616 "claimed": false, 00:34:31.616 "zoned": false, 00:34:31.616 "supported_io_types": { 00:34:31.616 "read": true, 00:34:31.616 "write": true, 00:34:31.616 "unmap": true, 00:34:31.616 "write_zeroes": true, 00:34:31.616 "flush": true, 00:34:31.616 "reset": true, 00:34:31.616 "compare": false, 00:34:31.616 "compare_and_write": false, 00:34:31.616 "abort": true, 00:34:31.616 "nvme_admin": false, 00:34:31.616 "nvme_io": false 00:34:31.616 }, 00:34:31.616 "memory_domains": [ 00:34:31.616 { 00:34:31.616 "dma_device_id": "system", 00:34:31.616 "dma_device_type": 1 00:34:31.616 }, 00:34:31.616 { 00:34:31.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:31.616 "dma_device_type": 2 00:34:31.616 } 00:34:31.616 ], 00:34:31.616 "driver_specific": {} 00:34:31.616 } 00:34:31.616 ] 00:34:31.616 19:27:47 -- common/autotest_common.sh@893 -- # return 0 00:34:31.616 19:27:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:31.875 [2024-04-18 19:27:47.698900] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:31.875 [2024-04-18 19:27:47.701017] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:31.875 [2024-04-18 19:27:47.701095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:31.875 [2024-04-18 19:27:47.701107] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:31.875 [2024-04-18 19:27:47.701132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:31.875 [2024-04-18 19:27:47.701141] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:31.875 [2024-04-18 19:27:47.701158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
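The verify_raid_bdev_state calls interleaved above boil down to one RPC query plus a jq filter. A minimal sketch, assuming the same raid bdev name and expected values as the log; the real helper in bdev_raid.sh also checks strip size, raid level and the per-base-bdev list.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # while base bdevs are still missing, the raid stays in the "configuring" state
  echo "$info" | jq -r '.state'                      # expected: configuring
  echo "$info" | jq -r '.num_base_bdevs_discovered'  # grows from 0 to 4 as BaseBdev1-4 are claimed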
00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.875 19:27:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.133 19:27:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:32.133 "name": "Existed_Raid", 00:34:32.133 "uuid": "84b5bd2c-8275-4e12-b819-7d307f208498", 00:34:32.133 "strip_size_kb": 64, 00:34:32.133 "state": "configuring", 00:34:32.133 "raid_level": "raid0", 00:34:32.133 "superblock": true, 00:34:32.133 "num_base_bdevs": 4, 00:34:32.133 "num_base_bdevs_discovered": 1, 00:34:32.133 "num_base_bdevs_operational": 4, 00:34:32.133 "base_bdevs_list": [ 00:34:32.133 { 00:34:32.133 "name": "BaseBdev1", 00:34:32.133 "uuid": "3f2a3d6f-b8e1-4506-9c50-14de9eb9b6cf", 00:34:32.133 "is_configured": true, 00:34:32.133 "data_offset": 2048, 00:34:32.133 "data_size": 63488 00:34:32.133 }, 00:34:32.133 { 00:34:32.133 "name": "BaseBdev2", 00:34:32.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.133 "is_configured": false, 00:34:32.133 "data_offset": 0, 00:34:32.133 "data_size": 0 00:34:32.133 }, 00:34:32.133 { 00:34:32.133 "name": "BaseBdev3", 00:34:32.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.133 "is_configured": false, 00:34:32.133 "data_offset": 0, 00:34:32.133 "data_size": 0 00:34:32.133 }, 00:34:32.133 { 00:34:32.133 "name": "BaseBdev4", 00:34:32.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.133 "is_configured": false, 00:34:32.133 "data_offset": 0, 00:34:32.133 "data_size": 0 00:34:32.133 } 00:34:32.133 ] 00:34:32.133 }' 00:34:32.133 19:27:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:32.133 19:27:48 -- common/autotest_common.sh@10 -- # set +x 00:34:33.067 19:27:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:33.325 [2024-04-18 19:27:49.137935] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:33.325 BaseBdev2 00:34:33.325 19:27:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:34:33.325 19:27:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:34:33.325 19:27:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:33.325 19:27:49 -- common/autotest_common.sh@887 -- # local i 00:34:33.325 19:27:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:33.325 19:27:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:33.325 19:27:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:33.583 19:27:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:33.841 [ 00:34:33.841 { 00:34:33.841 "name": "BaseBdev2", 00:34:33.841 "aliases": [ 00:34:33.841 "25d2b4bf-dd8a-459e-993b-d8fc26e03ef1" 00:34:33.841 ], 00:34:33.841 "product_name": "Malloc disk", 00:34:33.841 "block_size": 512, 00:34:33.841 "num_blocks": 65536, 00:34:33.841 "uuid": "25d2b4bf-dd8a-459e-993b-d8fc26e03ef1", 00:34:33.841 "assigned_rate_limits": { 00:34:33.841 "rw_ios_per_sec": 0, 
00:34:33.841 "rw_mbytes_per_sec": 0, 00:34:33.841 "r_mbytes_per_sec": 0, 00:34:33.841 "w_mbytes_per_sec": 0 00:34:33.841 }, 00:34:33.841 "claimed": true, 00:34:33.841 "claim_type": "exclusive_write", 00:34:33.841 "zoned": false, 00:34:33.841 "supported_io_types": { 00:34:33.841 "read": true, 00:34:33.841 "write": true, 00:34:33.841 "unmap": true, 00:34:33.841 "write_zeroes": true, 00:34:33.841 "flush": true, 00:34:33.841 "reset": true, 00:34:33.841 "compare": false, 00:34:33.841 "compare_and_write": false, 00:34:33.841 "abort": true, 00:34:33.841 "nvme_admin": false, 00:34:33.841 "nvme_io": false 00:34:33.841 }, 00:34:33.841 "memory_domains": [ 00:34:33.841 { 00:34:33.841 "dma_device_id": "system", 00:34:33.841 "dma_device_type": 1 00:34:33.841 }, 00:34:33.841 { 00:34:33.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.841 "dma_device_type": 2 00:34:33.841 } 00:34:33.841 ], 00:34:33.841 "driver_specific": {} 00:34:33.841 } 00:34:33.841 ] 00:34:33.841 19:27:49 -- common/autotest_common.sh@893 -- # return 0 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.841 19:27:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:34.099 19:27:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:34.099 "name": "Existed_Raid", 00:34:34.099 "uuid": "84b5bd2c-8275-4e12-b819-7d307f208498", 00:34:34.099 "strip_size_kb": 64, 00:34:34.099 "state": "configuring", 00:34:34.099 "raid_level": "raid0", 00:34:34.099 "superblock": true, 00:34:34.099 "num_base_bdevs": 4, 00:34:34.099 "num_base_bdevs_discovered": 2, 00:34:34.099 "num_base_bdevs_operational": 4, 00:34:34.099 "base_bdevs_list": [ 00:34:34.099 { 00:34:34.099 "name": "BaseBdev1", 00:34:34.100 "uuid": "3f2a3d6f-b8e1-4506-9c50-14de9eb9b6cf", 00:34:34.100 "is_configured": true, 00:34:34.100 "data_offset": 2048, 00:34:34.100 "data_size": 63488 00:34:34.100 }, 00:34:34.100 { 00:34:34.100 "name": "BaseBdev2", 00:34:34.100 "uuid": "25d2b4bf-dd8a-459e-993b-d8fc26e03ef1", 00:34:34.100 "is_configured": true, 00:34:34.100 "data_offset": 2048, 00:34:34.100 "data_size": 63488 00:34:34.100 }, 00:34:34.100 { 00:34:34.100 "name": "BaseBdev3", 00:34:34.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.100 "is_configured": false, 00:34:34.100 "data_offset": 0, 00:34:34.100 "data_size": 0 00:34:34.100 }, 00:34:34.100 { 00:34:34.100 "name": "BaseBdev4", 00:34:34.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.100 "is_configured": false, 00:34:34.100 "data_offset": 0, 00:34:34.100 
"data_size": 0 00:34:34.100 } 00:34:34.100 ] 00:34:34.100 }' 00:34:34.100 19:27:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:34.100 19:27:49 -- common/autotest_common.sh@10 -- # set +x 00:34:35.032 19:27:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:35.290 [2024-04-18 19:27:50.972029] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:35.290 BaseBdev3 00:34:35.290 19:27:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:34:35.290 19:27:50 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:34:35.290 19:27:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:35.290 19:27:50 -- common/autotest_common.sh@887 -- # local i 00:34:35.290 19:27:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:35.290 19:27:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:35.290 19:27:50 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:35.548 19:27:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:35.807 [ 00:34:35.807 { 00:34:35.807 "name": "BaseBdev3", 00:34:35.807 "aliases": [ 00:34:35.807 "bbc2f8b4-2f40-4114-a69d-266d548ad18e" 00:34:35.807 ], 00:34:35.807 "product_name": "Malloc disk", 00:34:35.807 "block_size": 512, 00:34:35.807 "num_blocks": 65536, 00:34:35.807 "uuid": "bbc2f8b4-2f40-4114-a69d-266d548ad18e", 00:34:35.807 "assigned_rate_limits": { 00:34:35.807 "rw_ios_per_sec": 0, 00:34:35.807 "rw_mbytes_per_sec": 0, 00:34:35.807 "r_mbytes_per_sec": 0, 00:34:35.807 "w_mbytes_per_sec": 0 00:34:35.807 }, 00:34:35.807 "claimed": true, 00:34:35.807 "claim_type": "exclusive_write", 00:34:35.807 "zoned": false, 00:34:35.807 "supported_io_types": { 00:34:35.807 "read": true, 00:34:35.807 "write": true, 00:34:35.807 "unmap": true, 00:34:35.807 "write_zeroes": true, 00:34:35.807 "flush": true, 00:34:35.807 "reset": true, 00:34:35.807 "compare": false, 00:34:35.807 "compare_and_write": false, 00:34:35.807 "abort": true, 00:34:35.807 "nvme_admin": false, 00:34:35.807 "nvme_io": false 00:34:35.807 }, 00:34:35.807 "memory_domains": [ 00:34:35.807 { 00:34:35.807 "dma_device_id": "system", 00:34:35.807 "dma_device_type": 1 00:34:35.807 }, 00:34:35.807 { 00:34:35.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.807 "dma_device_type": 2 00:34:35.807 } 00:34:35.807 ], 00:34:35.807 "driver_specific": {} 00:34:35.807 } 00:34:35.807 ] 00:34:35.807 19:27:51 -- common/autotest_common.sh@893 -- # return 0 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:35.807 19:27:51 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:35.807 19:27:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.064 19:27:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:36.064 "name": "Existed_Raid", 00:34:36.064 "uuid": "84b5bd2c-8275-4e12-b819-7d307f208498", 00:34:36.064 "strip_size_kb": 64, 00:34:36.064 "state": "configuring", 00:34:36.065 "raid_level": "raid0", 00:34:36.065 "superblock": true, 00:34:36.065 "num_base_bdevs": 4, 00:34:36.065 "num_base_bdevs_discovered": 3, 00:34:36.065 "num_base_bdevs_operational": 4, 00:34:36.065 "base_bdevs_list": [ 00:34:36.065 { 00:34:36.065 "name": "BaseBdev1", 00:34:36.065 "uuid": "3f2a3d6f-b8e1-4506-9c50-14de9eb9b6cf", 00:34:36.065 "is_configured": true, 00:34:36.065 "data_offset": 2048, 00:34:36.065 "data_size": 63488 00:34:36.065 }, 00:34:36.065 { 00:34:36.065 "name": "BaseBdev2", 00:34:36.065 "uuid": "25d2b4bf-dd8a-459e-993b-d8fc26e03ef1", 00:34:36.065 "is_configured": true, 00:34:36.065 "data_offset": 2048, 00:34:36.065 "data_size": 63488 00:34:36.065 }, 00:34:36.065 { 00:34:36.065 "name": "BaseBdev3", 00:34:36.065 "uuid": "bbc2f8b4-2f40-4114-a69d-266d548ad18e", 00:34:36.065 "is_configured": true, 00:34:36.065 "data_offset": 2048, 00:34:36.065 "data_size": 63488 00:34:36.065 }, 00:34:36.065 { 00:34:36.065 "name": "BaseBdev4", 00:34:36.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.065 "is_configured": false, 00:34:36.065 "data_offset": 0, 00:34:36.065 "data_size": 0 00:34:36.065 } 00:34:36.065 ] 00:34:36.065 }' 00:34:36.065 19:27:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:36.065 19:27:51 -- common/autotest_common.sh@10 -- # set +x 00:34:36.662 19:27:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:36.921 [2024-04-18 19:27:52.833660] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:36.921 [2024-04-18 19:27:52.833916] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:34:36.921 [2024-04-18 19:27:52.833930] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:34:36.921 [2024-04-18 19:27:52.834072] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:34:36.921 BaseBdev4 00:34:36.921 [2024-04-18 19:27:52.834418] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:34:36.921 [2024-04-18 19:27:52.834431] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:34:36.921 [2024-04-18 19:27:52.834589] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.179 19:27:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:34:37.179 19:27:52 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:34:37.179 19:27:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:34:37.179 19:27:52 -- common/autotest_common.sh@887 -- # local i 00:34:37.179 19:27:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:34:37.179 19:27:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:34:37.179 19:27:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:37.179 19:27:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:37.745 [ 00:34:37.745 { 00:34:37.745 "name": "BaseBdev4", 00:34:37.745 "aliases": [ 00:34:37.745 "d1c3c5fa-0d28-4922-a60c-d42ca759fbfb" 00:34:37.745 ], 00:34:37.745 "product_name": "Malloc disk", 00:34:37.745 "block_size": 512, 00:34:37.745 "num_blocks": 65536, 00:34:37.745 "uuid": "d1c3c5fa-0d28-4922-a60c-d42ca759fbfb", 00:34:37.745 "assigned_rate_limits": { 00:34:37.745 "rw_ios_per_sec": 0, 00:34:37.745 "rw_mbytes_per_sec": 0, 00:34:37.745 "r_mbytes_per_sec": 0, 00:34:37.745 "w_mbytes_per_sec": 0 00:34:37.745 }, 00:34:37.745 "claimed": true, 00:34:37.745 "claim_type": "exclusive_write", 00:34:37.745 "zoned": false, 00:34:37.745 "supported_io_types": { 00:34:37.745 "read": true, 00:34:37.745 "write": true, 00:34:37.745 "unmap": true, 00:34:37.745 "write_zeroes": true, 00:34:37.745 "flush": true, 00:34:37.745 "reset": true, 00:34:37.745 "compare": false, 00:34:37.745 "compare_and_write": false, 00:34:37.745 "abort": true, 00:34:37.745 "nvme_admin": false, 00:34:37.745 "nvme_io": false 00:34:37.745 }, 00:34:37.745 "memory_domains": [ 00:34:37.745 { 00:34:37.745 "dma_device_id": "system", 00:34:37.745 "dma_device_type": 1 00:34:37.745 }, 00:34:37.745 { 00:34:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:37.745 "dma_device_type": 2 00:34:37.745 } 00:34:37.745 ], 00:34:37.745 "driver_specific": {} 00:34:37.745 } 00:34:37.745 ] 00:34:37.745 19:27:53 -- common/autotest_common.sh@893 -- # return 0 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:37.745 19:27:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.002 19:27:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:38.002 "name": "Existed_Raid", 00:34:38.002 "uuid": "84b5bd2c-8275-4e12-b819-7d307f208498", 00:34:38.002 "strip_size_kb": 64, 00:34:38.002 "state": "online", 00:34:38.002 "raid_level": "raid0", 00:34:38.002 "superblock": true, 00:34:38.002 "num_base_bdevs": 4, 00:34:38.002 "num_base_bdevs_discovered": 4, 00:34:38.002 "num_base_bdevs_operational": 4, 00:34:38.002 "base_bdevs_list": [ 00:34:38.002 { 00:34:38.002 "name": "BaseBdev1", 00:34:38.002 "uuid": "3f2a3d6f-b8e1-4506-9c50-14de9eb9b6cf", 00:34:38.002 "is_configured": true, 00:34:38.002 "data_offset": 2048, 00:34:38.002 "data_size": 63488 00:34:38.002 }, 00:34:38.002 { 00:34:38.002 "name": 
"BaseBdev2", 00:34:38.002 "uuid": "25d2b4bf-dd8a-459e-993b-d8fc26e03ef1", 00:34:38.002 "is_configured": true, 00:34:38.002 "data_offset": 2048, 00:34:38.002 "data_size": 63488 00:34:38.002 }, 00:34:38.002 { 00:34:38.002 "name": "BaseBdev3", 00:34:38.002 "uuid": "bbc2f8b4-2f40-4114-a69d-266d548ad18e", 00:34:38.002 "is_configured": true, 00:34:38.002 "data_offset": 2048, 00:34:38.003 "data_size": 63488 00:34:38.003 }, 00:34:38.003 { 00:34:38.003 "name": "BaseBdev4", 00:34:38.003 "uuid": "d1c3c5fa-0d28-4922-a60c-d42ca759fbfb", 00:34:38.003 "is_configured": true, 00:34:38.003 "data_offset": 2048, 00:34:38.003 "data_size": 63488 00:34:38.003 } 00:34:38.003 ] 00:34:38.003 }' 00:34:38.003 19:27:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:38.003 19:27:53 -- common/autotest_common.sh@10 -- # set +x 00:34:38.937 19:27:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:38.937 [2024-04-18 19:27:54.770228] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:38.937 [2024-04-18 19:27:54.770271] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:38.937 [2024-04-18 19:27:54.770335] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.195 19:27:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:39.453 19:27:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:39.453 "name": "Existed_Raid", 00:34:39.453 "uuid": "84b5bd2c-8275-4e12-b819-7d307f208498", 00:34:39.453 "strip_size_kb": 64, 00:34:39.453 "state": "offline", 00:34:39.453 "raid_level": "raid0", 00:34:39.453 "superblock": true, 00:34:39.453 "num_base_bdevs": 4, 00:34:39.453 "num_base_bdevs_discovered": 3, 00:34:39.453 "num_base_bdevs_operational": 3, 00:34:39.453 "base_bdevs_list": [ 00:34:39.453 { 00:34:39.453 "name": null, 00:34:39.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.453 "is_configured": false, 00:34:39.453 "data_offset": 2048, 00:34:39.453 "data_size": 63488 00:34:39.453 }, 00:34:39.453 { 00:34:39.453 "name": "BaseBdev2", 00:34:39.453 "uuid": "25d2b4bf-dd8a-459e-993b-d8fc26e03ef1", 00:34:39.453 "is_configured": true, 00:34:39.453 
"data_offset": 2048, 00:34:39.453 "data_size": 63488 00:34:39.453 }, 00:34:39.453 { 00:34:39.453 "name": "BaseBdev3", 00:34:39.453 "uuid": "bbc2f8b4-2f40-4114-a69d-266d548ad18e", 00:34:39.453 "is_configured": true, 00:34:39.453 "data_offset": 2048, 00:34:39.453 "data_size": 63488 00:34:39.453 }, 00:34:39.453 { 00:34:39.453 "name": "BaseBdev4", 00:34:39.453 "uuid": "d1c3c5fa-0d28-4922-a60c-d42ca759fbfb", 00:34:39.453 "is_configured": true, 00:34:39.453 "data_offset": 2048, 00:34:39.453 "data_size": 63488 00:34:39.453 } 00:34:39.453 ] 00:34:39.453 }' 00:34:39.453 19:27:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:39.453 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:34:40.388 19:27:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:40.388 19:27:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:40.388 19:27:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:40.388 19:27:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.388 19:27:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:40.388 19:27:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:40.388 19:27:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:40.647 [2024-04-18 19:27:56.506873] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:40.905 19:27:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:40.905 19:27:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:40.905 19:27:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.905 19:27:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:41.163 19:27:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:41.163 19:27:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:41.163 19:27:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:41.422 [2024-04-18 19:27:57.140577] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:41.422 19:27:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:41.422 19:27:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:41.422 19:27:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.422 19:27:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:41.679 19:27:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:41.679 19:27:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:41.679 19:27:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:34:41.938 [2024-04-18 19:27:57.739858] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:34:41.938 [2024-04-18 19:27:57.739927] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:34:41.938 19:27:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:41.938 19:27:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:41.938 19:27:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.938 19:27:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] 
| select(.)' 00:34:42.504 19:27:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:34:42.504 19:27:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:42.504 19:27:58 -- bdev/bdev_raid.sh@287 -- # killprocess 128581 00:34:42.504 19:27:58 -- common/autotest_common.sh@936 -- # '[' -z 128581 ']' 00:34:42.504 19:27:58 -- common/autotest_common.sh@940 -- # kill -0 128581 00:34:42.504 19:27:58 -- common/autotest_common.sh@941 -- # uname 00:34:42.504 19:27:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:42.504 19:27:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128581 00:34:42.504 killing process with pid 128581 00:34:42.504 19:27:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:42.504 19:27:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:42.504 19:27:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128581' 00:34:42.504 19:27:58 -- common/autotest_common.sh@955 -- # kill 128581 00:34:42.504 19:27:58 -- common/autotest_common.sh@960 -- # wait 128581 00:34:42.504 [2024-04-18 19:27:58.247912] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:42.504 [2024-04-18 19:27:58.248319] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:43.877 ************************************ 00:34:43.877 END TEST raid_state_function_test_sb 00:34:43.877 ************************************ 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:43.877 00:34:43.877 real 0m17.733s 00:34:43.877 user 0m31.305s 00:34:43.877 sys 0m2.198s 00:34:43.877 19:27:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:43.877 19:27:59 -- common/autotest_common.sh@10 -- # set +x 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:34:43.877 19:27:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:34:43.877 19:27:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:43.877 19:27:59 -- common/autotest_common.sh@10 -- # set +x 00:34:43.877 ************************************ 00:34:43.877 START TEST raid_superblock_test 00:34:43.877 ************************************ 00:34:43.877 19:27:59 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 4 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:34:43.877 19:27:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=129105 
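Between the two tests the harness tears down the old bdev_svc app and launches a fresh one for raid_superblock_test. A simplified sketch of that start/wait pattern, using the binary path and socket from the log; the real waitforlisten/killprocess helpers in autotest_common.sh do more bookkeeping (the kill -0, uname and ps checks visible above), and the polling loop here is only illustrative.

  # launch the bdev service app with raid debug logging, RPC on the raid socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # wait until the RPC socket answers before issuing test RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # ... run the test, then tear the app down ...
  kill "$raid_pid" && wait "$raid_pid"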
00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129105 /var/tmp/spdk-raid.sock 00:34:43.878 19:27:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:43.878 19:27:59 -- common/autotest_common.sh@817 -- # '[' -z 129105 ']' 00:34:43.878 19:27:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:43.878 19:27:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:43.878 19:27:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:43.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:43.878 19:27:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:43.878 19:27:59 -- common/autotest_common.sh@10 -- # set +x 00:34:44.136 [2024-04-18 19:27:59.842494] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:34:44.136 [2024-04-18 19:27:59.842847] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129105 ] 00:34:44.136 [2024-04-18 19:28:00.022309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.397 [2024-04-18 19:28:00.315472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.655 [2024-04-18 19:28:00.553307] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:44.914 19:28:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:44.914 19:28:00 -- common/autotest_common.sh@850 -- # return 0 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:44.914 19:28:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:45.479 malloc1 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:45.479 [2024-04-18 19:28:01.352554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:45.479 [2024-04-18 19:28:01.352659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:45.479 [2024-04-18 19:28:01.352693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:34:45.479 [2024-04-18 19:28:01.352749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:45.479 [2024-04-18 19:28:01.355413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:45.479 [2024-04-18 19:28:01.355484] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:45.479 
pt1 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:45.479 19:28:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:46.045 malloc2 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:46.045 [2024-04-18 19:28:01.933178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:46.045 [2024-04-18 19:28:01.933275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:46.045 [2024-04-18 19:28:01.933322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:34:46.045 [2024-04-18 19:28:01.933382] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:46.045 [2024-04-18 19:28:01.935964] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:46.045 [2024-04-18 19:28:01.936026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:46.045 pt2 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:46.045 19:28:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:34:46.633 malloc3 00:34:46.633 19:28:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:46.633 [2024-04-18 19:28:02.559621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:46.633 [2024-04-18 19:28:02.559714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:46.633 [2024-04-18 19:28:02.559756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:46.633 [2024-04-18 19:28:02.559800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:46.892 [2024-04-18 19:28:02.562392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:46.892 [2024-04-18 19:28:02.562463] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:46.892 pt3 
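The pt1/pt2 setup above layers a passthru bdev with a fixed UUID over each malloc bdev, presumably so base bdevs can later be matched by a stable UUID. A minimal sketch for one pair, with the names and UUID taken verbatim from the log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b malloc1
  # wrap the malloc bdev in a passthru bdev with a well-known UUID
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001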
00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:46.892 19:28:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:34:47.150 malloc4 00:34:47.150 19:28:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:47.150 [2024-04-18 19:28:03.043431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:47.150 [2024-04-18 19:28:03.043578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.150 [2024-04-18 19:28:03.043643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:47.150 [2024-04-18 19:28:03.043701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.150 [2024-04-18 19:28:03.049538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.151 [2024-04-18 19:28:03.049633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:47.151 pt4 00:34:47.151 19:28:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:47.151 19:28:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:47.151 19:28:03 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:34:47.411 [2024-04-18 19:28:03.274029] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:47.411 [2024-04-18 19:28:03.276212] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:47.411 [2024-04-18 19:28:03.276293] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:47.411 [2024-04-18 19:28:03.276375] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:47.411 [2024-04-18 19:28:03.276585] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:34:47.411 [2024-04-18 19:28:03.276604] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:34:47.411 [2024-04-18 19:28:03.276746] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:34:47.411 [2024-04-18 19:28:03.277110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:34:47.411 [2024-04-18 19:28:03.277129] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:34:47.411 [2024-04-18 19:28:03.277295] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
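Once pt1-pt4 exist, the raid itself is assembled with an on-disk superblock (-s) and comes up online, as the log shows next. The creation call reduced to a sketch, with arguments copied from the rpc.py invocation above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # raid0 over the four passthru bdevs, 64 KB strip size, superblock enabled
  $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # the new raid should report state "online" with 4 of 4 base bdevs discovered
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'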
00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.411 19:28:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.671 19:28:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:47.671 "name": "raid_bdev1", 00:34:47.671 "uuid": "c8299c08-c4b6-4c78-99a7-1f8d437229ed", 00:34:47.671 "strip_size_kb": 64, 00:34:47.671 "state": "online", 00:34:47.671 "raid_level": "raid0", 00:34:47.671 "superblock": true, 00:34:47.671 "num_base_bdevs": 4, 00:34:47.671 "num_base_bdevs_discovered": 4, 00:34:47.671 "num_base_bdevs_operational": 4, 00:34:47.671 "base_bdevs_list": [ 00:34:47.671 { 00:34:47.671 "name": "pt1", 00:34:47.671 "uuid": "0d53bb99-65d0-5e30-bd4c-d3b051169c72", 00:34:47.671 "is_configured": true, 00:34:47.671 "data_offset": 2048, 00:34:47.671 "data_size": 63488 00:34:47.671 }, 00:34:47.671 { 00:34:47.671 "name": "pt2", 00:34:47.671 "uuid": "00c08c77-50e0-5ebb-a9b4-351a496e56f4", 00:34:47.671 "is_configured": true, 00:34:47.671 "data_offset": 2048, 00:34:47.671 "data_size": 63488 00:34:47.671 }, 00:34:47.671 { 00:34:47.671 "name": "pt3", 00:34:47.671 "uuid": "27726a61-9f69-5797-94d8-b29fed6aa4ca", 00:34:47.671 "is_configured": true, 00:34:47.671 "data_offset": 2048, 00:34:47.671 "data_size": 63488 00:34:47.671 }, 00:34:47.671 { 00:34:47.671 "name": "pt4", 00:34:47.671 "uuid": "46e9bbfe-e2f8-5879-be4a-999d4519fd9c", 00:34:47.671 "is_configured": true, 00:34:47.671 "data_offset": 2048, 00:34:47.671 "data_size": 63488 00:34:47.671 } 00:34:47.671 ] 00:34:47.671 }' 00:34:47.671 19:28:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:47.671 19:28:03 -- common/autotest_common.sh@10 -- # set +x 00:34:48.239 19:28:04 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:34:48.239 19:28:04 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:48.497 [2024-04-18 19:28:04.322482] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:48.497 19:28:04 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c8299c08-c4b6-4c78-99a7-1f8d437229ed 00:34:48.497 19:28:04 -- bdev/bdev_raid.sh@380 -- # '[' -z c8299c08-c4b6-4c78-99a7-1f8d437229ed ']' 00:34:48.497 19:28:04 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:48.756 [2024-04-18 19:28:04.602236] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:48.756 [2024-04-18 19:28:04.602272] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:48.756 [2024-04-18 19:28:04.602356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:48.756 [2024-04-18 19:28:04.602431] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:34:48.756 [2024-04-18 19:28:04.602443] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:34:48.756 19:28:04 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.756 19:28:04 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:34:49.044 19:28:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:34:49.044 19:28:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:34:49.044 19:28:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.044 19:28:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:49.330 19:28:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.330 19:28:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:49.589 19:28:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.589 19:28:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:49.848 19:28:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.848 19:28:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:50.105 19:28:05 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:50.105 19:28:05 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:50.370 19:28:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:34:50.370 19:28:06 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:50.370 19:28:06 -- common/autotest_common.sh@638 -- # local es=0 00:34:50.370 19:28:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:50.370 19:28:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:50.370 19:28:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:50.370 19:28:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:50.370 19:28:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:50.370 19:28:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:50.370 19:28:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:50.370 19:28:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:50.370 19:28:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:50.370 19:28:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:50.949 [2024-04-18 19:28:06.566869] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:50.949 [2024-04-18 19:28:06.569502] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:34:50.949 [2024-04-18 19:28:06.569588] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:50.949 [2024-04-18 19:28:06.569646] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:34:50.949 [2024-04-18 19:28:06.569715] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:34:50.949 [2024-04-18 19:28:06.569814] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:34:50.949 [2024-04-18 19:28:06.569862] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:34:50.949 [2024-04-18 19:28:06.569951] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:34:50.949 [2024-04-18 19:28:06.569989] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:50.949 [2024-04-18 19:28:06.570007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:34:50.949 request: 00:34:50.949 { 00:34:50.949 "name": "raid_bdev1", 00:34:50.949 "raid_level": "raid0", 00:34:50.949 "base_bdevs": [ 00:34:50.949 "malloc1", 00:34:50.949 "malloc2", 00:34:50.949 "malloc3", 00:34:50.949 "malloc4" 00:34:50.949 ], 00:34:50.949 "superblock": false, 00:34:50.949 "strip_size_kb": 64, 00:34:50.949 "method": "bdev_raid_create", 00:34:50.949 "req_id": 1 00:34:50.949 } 00:34:50.949 Got JSON-RPC error response 00:34:50.949 response: 00:34:50.949 { 00:34:50.949 "code": -17, 00:34:50.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:50.949 } 00:34:50.949 19:28:06 -- common/autotest_common.sh@641 -- # es=1 00:34:50.949 19:28:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:34:50.949 19:28:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:34:50.949 19:28:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:34:50.949 19:28:06 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:50.949 19:28:06 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:34:50.949 19:28:06 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:34:50.949 19:28:06 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:34:50.949 19:28:06 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:51.218 [2024-04-18 19:28:07.027744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:51.218 [2024-04-18 19:28:07.027870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.218 [2024-04-18 19:28:07.027914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:51.218 [2024-04-18 19:28:07.027950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.218 [2024-04-18 19:28:07.031190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.218 [2024-04-18 19:28:07.031303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:51.218 [2024-04-18 19:28:07.031501] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:51.218 [2024-04-18 19:28:07.031593] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is 
claimed 00:34:51.218 pt1 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:51.218 19:28:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.489 19:28:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:51.489 "name": "raid_bdev1", 00:34:51.489 "uuid": "c8299c08-c4b6-4c78-99a7-1f8d437229ed", 00:34:51.489 "strip_size_kb": 64, 00:34:51.489 "state": "configuring", 00:34:51.489 "raid_level": "raid0", 00:34:51.489 "superblock": true, 00:34:51.489 "num_base_bdevs": 4, 00:34:51.489 "num_base_bdevs_discovered": 1, 00:34:51.489 "num_base_bdevs_operational": 4, 00:34:51.489 "base_bdevs_list": [ 00:34:51.489 { 00:34:51.489 "name": "pt1", 00:34:51.489 "uuid": "0d53bb99-65d0-5e30-bd4c-d3b051169c72", 00:34:51.489 "is_configured": true, 00:34:51.489 "data_offset": 2048, 00:34:51.489 "data_size": 63488 00:34:51.489 }, 00:34:51.489 { 00:34:51.489 "name": null, 00:34:51.489 "uuid": "00c08c77-50e0-5ebb-a9b4-351a496e56f4", 00:34:51.489 "is_configured": false, 00:34:51.489 "data_offset": 2048, 00:34:51.489 "data_size": 63488 00:34:51.489 }, 00:34:51.489 { 00:34:51.489 "name": null, 00:34:51.489 "uuid": "27726a61-9f69-5797-94d8-b29fed6aa4ca", 00:34:51.489 "is_configured": false, 00:34:51.489 "data_offset": 2048, 00:34:51.489 "data_size": 63488 00:34:51.489 }, 00:34:51.489 { 00:34:51.489 "name": null, 00:34:51.489 "uuid": "46e9bbfe-e2f8-5879-be4a-999d4519fd9c", 00:34:51.489 "is_configured": false, 00:34:51.489 "data_offset": 2048, 00:34:51.489 "data_size": 63488 00:34:51.489 } 00:34:51.489 ] 00:34:51.489 }' 00:34:51.489 19:28:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:51.489 19:28:07 -- common/autotest_common.sh@10 -- # set +x 00:34:52.459 19:28:08 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:34:52.459 19:28:08 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:52.459 [2024-04-18 19:28:08.272216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:52.459 [2024-04-18 19:28:08.272323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:52.459 [2024-04-18 19:28:08.272368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:52.459 [2024-04-18 19:28:08.272390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:52.459 [2024-04-18 19:28:08.273190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:52.459 [2024-04-18 19:28:08.273258] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:34:52.460 [2024-04-18 19:28:08.273615] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:52.460 [2024-04-18 19:28:08.273655] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:52.460 pt2 00:34:52.460 19:28:08 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:52.717 [2024-04-18 19:28:08.580306] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.717 19:28:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:53.026 19:28:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:53.026 "name": "raid_bdev1", 00:34:53.026 "uuid": "c8299c08-c4b6-4c78-99a7-1f8d437229ed", 00:34:53.026 "strip_size_kb": 64, 00:34:53.026 "state": "configuring", 00:34:53.026 "raid_level": "raid0", 00:34:53.026 "superblock": true, 00:34:53.026 "num_base_bdevs": 4, 00:34:53.026 "num_base_bdevs_discovered": 1, 00:34:53.026 "num_base_bdevs_operational": 4, 00:34:53.026 "base_bdevs_list": [ 00:34:53.026 { 00:34:53.026 "name": "pt1", 00:34:53.026 "uuid": "0d53bb99-65d0-5e30-bd4c-d3b051169c72", 00:34:53.026 "is_configured": true, 00:34:53.026 "data_offset": 2048, 00:34:53.026 "data_size": 63488 00:34:53.026 }, 00:34:53.026 { 00:34:53.026 "name": null, 00:34:53.026 "uuid": "00c08c77-50e0-5ebb-a9b4-351a496e56f4", 00:34:53.026 "is_configured": false, 00:34:53.026 "data_offset": 2048, 00:34:53.026 "data_size": 63488 00:34:53.026 }, 00:34:53.026 { 00:34:53.026 "name": null, 00:34:53.026 "uuid": "27726a61-9f69-5797-94d8-b29fed6aa4ca", 00:34:53.026 "is_configured": false, 00:34:53.026 "data_offset": 2048, 00:34:53.026 "data_size": 63488 00:34:53.026 }, 00:34:53.026 { 00:34:53.026 "name": null, 00:34:53.026 "uuid": "46e9bbfe-e2f8-5879-be4a-999d4519fd9c", 00:34:53.026 "is_configured": false, 00:34:53.026 "data_offset": 2048, 00:34:53.026 "data_size": 63488 00:34:53.026 } 00:34:53.026 ] 00:34:53.026 }' 00:34:53.026 19:28:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:53.026 19:28:08 -- common/autotest_common.sh@10 -- # set +x 00:34:53.593 19:28:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:34:53.593 19:28:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:53.593 19:28:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:54.161 [2024-04-18 19:28:09.784716] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:34:54.161 [2024-04-18 19:28:09.784827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.161 [2024-04-18 19:28:09.784869] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:54.161 [2024-04-18 19:28:09.784894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.161 [2024-04-18 19:28:09.785380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.161 [2024-04-18 19:28:09.785447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:54.161 [2024-04-18 19:28:09.785551] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:54.161 [2024-04-18 19:28:09.785587] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:54.161 pt2 00:34:54.161 19:28:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:34:54.161 19:28:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:54.161 19:28:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:54.161 [2024-04-18 19:28:10.012749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:54.161 [2024-04-18 19:28:10.012854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.161 [2024-04-18 19:28:10.012889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:54.161 [2024-04-18 19:28:10.012915] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.161 [2024-04-18 19:28:10.013390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.161 [2024-04-18 19:28:10.013454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:54.161 [2024-04-18 19:28:10.013562] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:34:54.161 [2024-04-18 19:28:10.013592] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:54.161 pt3 00:34:54.161 19:28:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:34:54.161 19:28:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:54.161 19:28:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:54.420 [2024-04-18 19:28:10.248805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:54.420 [2024-04-18 19:28:10.248914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.420 [2024-04-18 19:28:10.248958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:54.420 [2024-04-18 19:28:10.248988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.420 [2024-04-18 19:28:10.249473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.420 [2024-04-18 19:28:10.249531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:54.420 [2024-04-18 19:28:10.249646] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:34:54.420 [2024-04-18 19:28:10.249672] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 
00:34:54.420 [2024-04-18 19:28:10.249801] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:34:54.420 [2024-04-18 19:28:10.249819] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:34:54.420 [2024-04-18 19:28:10.249928] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:54.420 [2024-04-18 19:28:10.250242] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:34:54.420 [2024-04-18 19:28:10.250262] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:34:54.420 [2024-04-18 19:28:10.250403] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:54.420 pt4 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:54.420 19:28:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.679 19:28:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:54.679 "name": "raid_bdev1", 00:34:54.679 "uuid": "c8299c08-c4b6-4c78-99a7-1f8d437229ed", 00:34:54.679 "strip_size_kb": 64, 00:34:54.679 "state": "online", 00:34:54.679 "raid_level": "raid0", 00:34:54.679 "superblock": true, 00:34:54.679 "num_base_bdevs": 4, 00:34:54.679 "num_base_bdevs_discovered": 4, 00:34:54.679 "num_base_bdevs_operational": 4, 00:34:54.679 "base_bdevs_list": [ 00:34:54.679 { 00:34:54.679 "name": "pt1", 00:34:54.679 "uuid": "0d53bb99-65d0-5e30-bd4c-d3b051169c72", 00:34:54.679 "is_configured": true, 00:34:54.679 "data_offset": 2048, 00:34:54.679 "data_size": 63488 00:34:54.679 }, 00:34:54.679 { 00:34:54.679 "name": "pt2", 00:34:54.679 "uuid": "00c08c77-50e0-5ebb-a9b4-351a496e56f4", 00:34:54.679 "is_configured": true, 00:34:54.679 "data_offset": 2048, 00:34:54.679 "data_size": 63488 00:34:54.679 }, 00:34:54.679 { 00:34:54.679 "name": "pt3", 00:34:54.679 "uuid": "27726a61-9f69-5797-94d8-b29fed6aa4ca", 00:34:54.679 "is_configured": true, 00:34:54.679 "data_offset": 2048, 00:34:54.679 "data_size": 63488 00:34:54.679 }, 00:34:54.679 { 00:34:54.679 "name": "pt4", 00:34:54.680 "uuid": "46e9bbfe-e2f8-5879-be4a-999d4519fd9c", 00:34:54.680 "is_configured": true, 00:34:54.680 "data_offset": 2048, 00:34:54.680 "data_size": 63488 00:34:54.680 } 00:34:54.680 ] 00:34:54.680 }' 00:34:54.680 19:28:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:54.680 19:28:10 -- common/autotest_common.sh@10 -- # set +x 00:34:55.289 19:28:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 
00:34:55.289 19:28:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:55.568 [2024-04-18 19:28:11.373385] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:55.568 19:28:11 -- bdev/bdev_raid.sh@430 -- # '[' c8299c08-c4b6-4c78-99a7-1f8d437229ed '!=' c8299c08-c4b6-4c78-99a7-1f8d437229ed ']' 00:34:55.568 19:28:11 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:34:55.568 19:28:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:55.568 19:28:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:55.568 19:28:11 -- bdev/bdev_raid.sh@511 -- # killprocess 129105 00:34:55.568 19:28:11 -- common/autotest_common.sh@936 -- # '[' -z 129105 ']' 00:34:55.568 19:28:11 -- common/autotest_common.sh@940 -- # kill -0 129105 00:34:55.568 19:28:11 -- common/autotest_common.sh@941 -- # uname 00:34:55.568 19:28:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:55.568 19:28:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129105 00:34:55.568 19:28:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:55.568 19:28:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:55.568 19:28:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129105' 00:34:55.568 killing process with pid 129105 00:34:55.568 19:28:11 -- common/autotest_common.sh@955 -- # kill 129105 00:34:55.568 19:28:11 -- common/autotest_common.sh@960 -- # wait 129105 00:34:55.568 [2024-04-18 19:28:11.409756] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:55.568 [2024-04-18 19:28:11.409839] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:55.568 [2024-04-18 19:28:11.409908] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:55.568 [2024-04-18 19:28:11.409927] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:34:56.134 [2024-04-18 19:28:11.846544] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:57.510 ************************************ 00:34:57.510 END TEST raid_superblock_test 00:34:57.510 ************************************ 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:34:57.510 00:34:57.510 real 0m13.502s 00:34:57.510 user 0m23.109s 00:34:57.510 sys 0m1.722s 00:34:57.510 19:28:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:57.510 19:28:13 -- common/autotest_common.sh@10 -- # set +x 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:34:57.510 19:28:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:57.510 19:28:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:57.510 19:28:13 -- common/autotest_common.sh@10 -- # set +x 00:34:57.510 ************************************ 00:34:57.510 START TEST raid_state_function_test 00:34:57.510 ************************************ 00:34:57.510 19:28:13 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 false 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@205 -- 
# local raid_bdev 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=129468 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129468' 00:34:57.510 Process raid pid: 129468 00:34:57.510 19:28:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129468 /var/tmp/spdk-raid.sock 00:34:57.510 19:28:13 -- common/autotest_common.sh@817 -- # '[' -z 129468 ']' 00:34:57.510 19:28:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:57.510 19:28:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:57.510 19:28:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:57.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:57.510 19:28:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:57.510 19:28:13 -- common/autotest_common.sh@10 -- # set +x 00:34:57.510 [2024-04-18 19:28:13.420107] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:34:57.510 [2024-04-18 19:28:13.420537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.769 [2024-04-18 19:28:13.605007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.027 [2024-04-18 19:28:13.830260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.285 [2024-04-18 19:28:14.054320] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:58.544 19:28:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:58.544 19:28:14 -- common/autotest_common.sh@850 -- # return 0 00:34:58.544 19:28:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:58.803 [2024-04-18 19:28:14.558698] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:58.803 [2024-04-18 19:28:14.558947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:58.803 [2024-04-18 19:28:14.559038] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:58.803 [2024-04-18 19:28:14.559096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:58.803 [2024-04-18 19:28:14.559173] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:58.803 [2024-04-18 19:28:14.559243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:58.803 [2024-04-18 19:28:14.559459] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:58.803 [2024-04-18 19:28:14.559529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.803 19:28:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.061 19:28:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:59.061 "name": "Existed_Raid", 00:34:59.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.061 "strip_size_kb": 64, 00:34:59.061 "state": "configuring", 00:34:59.061 "raid_level": "concat", 00:34:59.061 "superblock": false, 00:34:59.061 "num_base_bdevs": 4, 00:34:59.061 "num_base_bdevs_discovered": 0, 00:34:59.061 "num_base_bdevs_operational": 4, 00:34:59.061 "base_bdevs_list": [ 00:34:59.061 { 00:34:59.061 
"name": "BaseBdev1", 00:34:59.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.061 "is_configured": false, 00:34:59.061 "data_offset": 0, 00:34:59.061 "data_size": 0 00:34:59.061 }, 00:34:59.061 { 00:34:59.061 "name": "BaseBdev2", 00:34:59.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.061 "is_configured": false, 00:34:59.061 "data_offset": 0, 00:34:59.061 "data_size": 0 00:34:59.061 }, 00:34:59.061 { 00:34:59.061 "name": "BaseBdev3", 00:34:59.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.061 "is_configured": false, 00:34:59.061 "data_offset": 0, 00:34:59.061 "data_size": 0 00:34:59.061 }, 00:34:59.061 { 00:34:59.061 "name": "BaseBdev4", 00:34:59.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.061 "is_configured": false, 00:34:59.061 "data_offset": 0, 00:34:59.061 "data_size": 0 00:34:59.061 } 00:34:59.061 ] 00:34:59.061 }' 00:34:59.061 19:28:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:59.062 19:28:14 -- common/autotest_common.sh@10 -- # set +x 00:34:59.627 19:28:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:59.885 [2024-04-18 19:28:15.714857] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:59.885 [2024-04-18 19:28:15.715061] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:59.885 19:28:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:00.145 [2024-04-18 19:28:15.930929] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:00.145 [2024-04-18 19:28:15.931146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:00.145 [2024-04-18 19:28:15.931237] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:00.145 [2024-04-18 19:28:15.931293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:00.145 [2024-04-18 19:28:15.931361] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:00.145 [2024-04-18 19:28:15.931533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:00.145 [2024-04-18 19:28:15.931607] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:00.145 [2024-04-18 19:28:15.931659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:00.145 19:28:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:00.403 [2024-04-18 19:28:16.244860] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:00.403 BaseBdev1 00:35:00.403 19:28:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:00.403 19:28:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:35:00.403 19:28:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:00.403 19:28:16 -- common/autotest_common.sh@887 -- # local i 00:35:00.403 19:28:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:00.403 19:28:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:00.403 19:28:16 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:00.661 19:28:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:00.920 [ 00:35:00.920 { 00:35:00.920 "name": "BaseBdev1", 00:35:00.920 "aliases": [ 00:35:00.920 "d47ae99f-f2e9-4b42-894d-b7731bbfaae4" 00:35:00.920 ], 00:35:00.920 "product_name": "Malloc disk", 00:35:00.920 "block_size": 512, 00:35:00.920 "num_blocks": 65536, 00:35:00.920 "uuid": "d47ae99f-f2e9-4b42-894d-b7731bbfaae4", 00:35:00.920 "assigned_rate_limits": { 00:35:00.920 "rw_ios_per_sec": 0, 00:35:00.920 "rw_mbytes_per_sec": 0, 00:35:00.920 "r_mbytes_per_sec": 0, 00:35:00.920 "w_mbytes_per_sec": 0 00:35:00.920 }, 00:35:00.920 "claimed": true, 00:35:00.920 "claim_type": "exclusive_write", 00:35:00.920 "zoned": false, 00:35:00.920 "supported_io_types": { 00:35:00.920 "read": true, 00:35:00.920 "write": true, 00:35:00.920 "unmap": true, 00:35:00.920 "write_zeroes": true, 00:35:00.920 "flush": true, 00:35:00.920 "reset": true, 00:35:00.920 "compare": false, 00:35:00.920 "compare_and_write": false, 00:35:00.920 "abort": true, 00:35:00.920 "nvme_admin": false, 00:35:00.920 "nvme_io": false 00:35:00.920 }, 00:35:00.920 "memory_domains": [ 00:35:00.920 { 00:35:00.920 "dma_device_id": "system", 00:35:00.920 "dma_device_type": 1 00:35:00.920 }, 00:35:00.920 { 00:35:00.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.920 "dma_device_type": 2 00:35:00.920 } 00:35:00.920 ], 00:35:00.920 "driver_specific": {} 00:35:00.920 } 00:35:00.920 ] 00:35:00.920 19:28:16 -- common/autotest_common.sh@893 -- # return 0 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.920 19:28:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.178 19:28:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:01.178 "name": "Existed_Raid", 00:35:01.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.178 "strip_size_kb": 64, 00:35:01.178 "state": "configuring", 00:35:01.178 "raid_level": "concat", 00:35:01.178 "superblock": false, 00:35:01.178 "num_base_bdevs": 4, 00:35:01.178 "num_base_bdevs_discovered": 1, 00:35:01.178 "num_base_bdevs_operational": 4, 00:35:01.178 "base_bdevs_list": [ 00:35:01.178 { 00:35:01.178 "name": "BaseBdev1", 00:35:01.178 "uuid": "d47ae99f-f2e9-4b42-894d-b7731bbfaae4", 00:35:01.178 "is_configured": true, 00:35:01.178 "data_offset": 0, 00:35:01.178 "data_size": 65536 00:35:01.178 }, 00:35:01.178 { 00:35:01.178 "name": "BaseBdev2", 00:35:01.178 "uuid": "00000000-0000-0000-0000-000000000000", 
00:35:01.178 "is_configured": false, 00:35:01.178 "data_offset": 0, 00:35:01.178 "data_size": 0 00:35:01.179 }, 00:35:01.179 { 00:35:01.179 "name": "BaseBdev3", 00:35:01.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.179 "is_configured": false, 00:35:01.179 "data_offset": 0, 00:35:01.179 "data_size": 0 00:35:01.179 }, 00:35:01.179 { 00:35:01.179 "name": "BaseBdev4", 00:35:01.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.179 "is_configured": false, 00:35:01.179 "data_offset": 0, 00:35:01.179 "data_size": 0 00:35:01.179 } 00:35:01.179 ] 00:35:01.179 }' 00:35:01.179 19:28:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:01.179 19:28:17 -- common/autotest_common.sh@10 -- # set +x 00:35:02.113 19:28:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:02.113 [2024-04-18 19:28:17.969348] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:02.113 [2024-04-18 19:28:17.969594] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:35:02.113 19:28:17 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:35:02.113 19:28:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:02.371 [2024-04-18 19:28:18.233447] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:02.372 [2024-04-18 19:28:18.239747] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:02.372 [2024-04-18 19:28:18.240053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:02.372 [2024-04-18 19:28:18.240361] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:02.372 [2024-04-18 19:28:18.240635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:02.372 [2024-04-18 19:28:18.240825] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:02.372 [2024-04-18 19:28:18.241126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.372 19:28:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:35:02.630 19:28:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:02.630 "name": "Existed_Raid", 00:35:02.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.630 "strip_size_kb": 64, 00:35:02.630 "state": "configuring", 00:35:02.630 "raid_level": "concat", 00:35:02.630 "superblock": false, 00:35:02.630 "num_base_bdevs": 4, 00:35:02.630 "num_base_bdevs_discovered": 1, 00:35:02.630 "num_base_bdevs_operational": 4, 00:35:02.630 "base_bdevs_list": [ 00:35:02.630 { 00:35:02.630 "name": "BaseBdev1", 00:35:02.630 "uuid": "d47ae99f-f2e9-4b42-894d-b7731bbfaae4", 00:35:02.630 "is_configured": true, 00:35:02.630 "data_offset": 0, 00:35:02.630 "data_size": 65536 00:35:02.630 }, 00:35:02.630 { 00:35:02.630 "name": "BaseBdev2", 00:35:02.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.630 "is_configured": false, 00:35:02.630 "data_offset": 0, 00:35:02.630 "data_size": 0 00:35:02.630 }, 00:35:02.630 { 00:35:02.630 "name": "BaseBdev3", 00:35:02.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.630 "is_configured": false, 00:35:02.630 "data_offset": 0, 00:35:02.630 "data_size": 0 00:35:02.630 }, 00:35:02.630 { 00:35:02.630 "name": "BaseBdev4", 00:35:02.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.630 "is_configured": false, 00:35:02.630 "data_offset": 0, 00:35:02.630 "data_size": 0 00:35:02.630 } 00:35:02.630 ] 00:35:02.630 }' 00:35:02.630 19:28:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:02.630 19:28:18 -- common/autotest_common.sh@10 -- # set +x 00:35:03.563 19:28:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:03.822 [2024-04-18 19:28:19.547112] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:03.822 BaseBdev2 00:35:03.822 19:28:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:03.822 19:28:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:35:03.822 19:28:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:03.822 19:28:19 -- common/autotest_common.sh@887 -- # local i 00:35:03.822 19:28:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:03.822 19:28:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:03.822 19:28:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:04.091 19:28:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:04.396 [ 00:35:04.396 { 00:35:04.396 "name": "BaseBdev2", 00:35:04.396 "aliases": [ 00:35:04.396 "6b64b5fb-a0c1-4e5e-87d8-94892f080b44" 00:35:04.396 ], 00:35:04.396 "product_name": "Malloc disk", 00:35:04.396 "block_size": 512, 00:35:04.396 "num_blocks": 65536, 00:35:04.396 "uuid": "6b64b5fb-a0c1-4e5e-87d8-94892f080b44", 00:35:04.396 "assigned_rate_limits": { 00:35:04.396 "rw_ios_per_sec": 0, 00:35:04.396 "rw_mbytes_per_sec": 0, 00:35:04.396 "r_mbytes_per_sec": 0, 00:35:04.396 "w_mbytes_per_sec": 0 00:35:04.396 }, 00:35:04.396 "claimed": true, 00:35:04.396 "claim_type": "exclusive_write", 00:35:04.396 "zoned": false, 00:35:04.396 "supported_io_types": { 00:35:04.396 "read": true, 00:35:04.396 "write": true, 00:35:04.396 "unmap": true, 00:35:04.396 "write_zeroes": true, 00:35:04.396 "flush": true, 00:35:04.396 "reset": true, 00:35:04.396 "compare": false, 00:35:04.396 "compare_and_write": false, 00:35:04.396 "abort": true, 
00:35:04.396 "nvme_admin": false, 00:35:04.396 "nvme_io": false 00:35:04.396 }, 00:35:04.396 "memory_domains": [ 00:35:04.396 { 00:35:04.396 "dma_device_id": "system", 00:35:04.396 "dma_device_type": 1 00:35:04.396 }, 00:35:04.396 { 00:35:04.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:04.396 "dma_device_type": 2 00:35:04.396 } 00:35:04.396 ], 00:35:04.396 "driver_specific": {} 00:35:04.396 } 00:35:04.396 ] 00:35:04.396 19:28:20 -- common/autotest_common.sh@893 -- # return 0 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.396 19:28:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:04.396 "name": "Existed_Raid", 00:35:04.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.396 "strip_size_kb": 64, 00:35:04.396 "state": "configuring", 00:35:04.396 "raid_level": "concat", 00:35:04.396 "superblock": false, 00:35:04.396 "num_base_bdevs": 4, 00:35:04.396 "num_base_bdevs_discovered": 2, 00:35:04.396 "num_base_bdevs_operational": 4, 00:35:04.396 "base_bdevs_list": [ 00:35:04.396 { 00:35:04.396 "name": "BaseBdev1", 00:35:04.396 "uuid": "d47ae99f-f2e9-4b42-894d-b7731bbfaae4", 00:35:04.396 "is_configured": true, 00:35:04.396 "data_offset": 0, 00:35:04.396 "data_size": 65536 00:35:04.396 }, 00:35:04.396 { 00:35:04.396 "name": "BaseBdev2", 00:35:04.396 "uuid": "6b64b5fb-a0c1-4e5e-87d8-94892f080b44", 00:35:04.396 "is_configured": true, 00:35:04.397 "data_offset": 0, 00:35:04.397 "data_size": 65536 00:35:04.397 }, 00:35:04.397 { 00:35:04.397 "name": "BaseBdev3", 00:35:04.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.397 "is_configured": false, 00:35:04.397 "data_offset": 0, 00:35:04.397 "data_size": 0 00:35:04.397 }, 00:35:04.397 { 00:35:04.397 "name": "BaseBdev4", 00:35:04.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.397 "is_configured": false, 00:35:04.397 "data_offset": 0, 00:35:04.397 "data_size": 0 00:35:04.397 } 00:35:04.397 ] 00:35:04.397 }' 00:35:04.397 19:28:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:04.397 19:28:20 -- common/autotest_common.sh@10 -- # set +x 00:35:05.330 19:28:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:05.589 [2024-04-18 19:28:21.303865] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:05.589 BaseBdev3 00:35:05.589 19:28:21 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:35:05.589 19:28:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:35:05.589 19:28:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:05.589 19:28:21 -- common/autotest_common.sh@887 -- # local i 00:35:05.589 19:28:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:05.589 19:28:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:05.589 19:28:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:05.847 19:28:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:06.105 [ 00:35:06.105 { 00:35:06.105 "name": "BaseBdev3", 00:35:06.105 "aliases": [ 00:35:06.105 "d2e1097e-8edc-48a4-89a6-fd11d2a027aa" 00:35:06.105 ], 00:35:06.105 "product_name": "Malloc disk", 00:35:06.105 "block_size": 512, 00:35:06.105 "num_blocks": 65536, 00:35:06.105 "uuid": "d2e1097e-8edc-48a4-89a6-fd11d2a027aa", 00:35:06.105 "assigned_rate_limits": { 00:35:06.105 "rw_ios_per_sec": 0, 00:35:06.105 "rw_mbytes_per_sec": 0, 00:35:06.105 "r_mbytes_per_sec": 0, 00:35:06.105 "w_mbytes_per_sec": 0 00:35:06.105 }, 00:35:06.105 "claimed": true, 00:35:06.105 "claim_type": "exclusive_write", 00:35:06.105 "zoned": false, 00:35:06.105 "supported_io_types": { 00:35:06.105 "read": true, 00:35:06.105 "write": true, 00:35:06.105 "unmap": true, 00:35:06.105 "write_zeroes": true, 00:35:06.105 "flush": true, 00:35:06.105 "reset": true, 00:35:06.105 "compare": false, 00:35:06.105 "compare_and_write": false, 00:35:06.105 "abort": true, 00:35:06.105 "nvme_admin": false, 00:35:06.105 "nvme_io": false 00:35:06.105 }, 00:35:06.105 "memory_domains": [ 00:35:06.105 { 00:35:06.105 "dma_device_id": "system", 00:35:06.105 "dma_device_type": 1 00:35:06.105 }, 00:35:06.105 { 00:35:06.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.105 "dma_device_type": 2 00:35:06.105 } 00:35:06.105 ], 00:35:06.105 "driver_specific": {} 00:35:06.105 } 00:35:06.105 ] 00:35:06.105 19:28:21 -- common/autotest_common.sh@893 -- # return 0 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.105 19:28:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:06.363 19:28:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:06.363 "name": "Existed_Raid", 00:35:06.363 "uuid": "00000000-0000-0000-0000-000000000000", 
00:35:06.363 "strip_size_kb": 64, 00:35:06.363 "state": "configuring", 00:35:06.363 "raid_level": "concat", 00:35:06.363 "superblock": false, 00:35:06.363 "num_base_bdevs": 4, 00:35:06.363 "num_base_bdevs_discovered": 3, 00:35:06.363 "num_base_bdevs_operational": 4, 00:35:06.363 "base_bdevs_list": [ 00:35:06.363 { 00:35:06.363 "name": "BaseBdev1", 00:35:06.363 "uuid": "d47ae99f-f2e9-4b42-894d-b7731bbfaae4", 00:35:06.363 "is_configured": true, 00:35:06.363 "data_offset": 0, 00:35:06.363 "data_size": 65536 00:35:06.363 }, 00:35:06.363 { 00:35:06.363 "name": "BaseBdev2", 00:35:06.363 "uuid": "6b64b5fb-a0c1-4e5e-87d8-94892f080b44", 00:35:06.363 "is_configured": true, 00:35:06.363 "data_offset": 0, 00:35:06.363 "data_size": 65536 00:35:06.363 }, 00:35:06.363 { 00:35:06.363 "name": "BaseBdev3", 00:35:06.363 "uuid": "d2e1097e-8edc-48a4-89a6-fd11d2a027aa", 00:35:06.363 "is_configured": true, 00:35:06.363 "data_offset": 0, 00:35:06.363 "data_size": 65536 00:35:06.363 }, 00:35:06.363 { 00:35:06.363 "name": "BaseBdev4", 00:35:06.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.363 "is_configured": false, 00:35:06.363 "data_offset": 0, 00:35:06.363 "data_size": 0 00:35:06.363 } 00:35:06.363 ] 00:35:06.363 }' 00:35:06.363 19:28:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:06.363 19:28:22 -- common/autotest_common.sh@10 -- # set +x 00:35:06.929 19:28:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:35:07.186 [2024-04-18 19:28:23.089255] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:07.186 [2024-04-18 19:28:23.089630] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:35:07.186 [2024-04-18 19:28:23.089803] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:35:07.186 [2024-04-18 19:28:23.090270] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:35:07.186 [2024-04-18 19:28:23.090829] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:35:07.186 [2024-04-18 19:28:23.091027] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:35:07.186 [2024-04-18 19:28:23.091477] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:07.186 BaseBdev4 00:35:07.444 19:28:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:35:07.444 19:28:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:35:07.444 19:28:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:07.444 19:28:23 -- common/autotest_common.sh@887 -- # local i 00:35:07.444 19:28:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:07.444 19:28:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:07.444 19:28:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:07.704 19:28:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:07.961 [ 00:35:07.961 { 00:35:07.961 "name": "BaseBdev4", 00:35:07.961 "aliases": [ 00:35:07.961 "f3832152-429e-465f-9920-aa69b822d5f8" 00:35:07.961 ], 00:35:07.961 "product_name": "Malloc disk", 00:35:07.961 "block_size": 512, 00:35:07.961 "num_blocks": 65536, 00:35:07.961 "uuid": 
"f3832152-429e-465f-9920-aa69b822d5f8", 00:35:07.961 "assigned_rate_limits": { 00:35:07.961 "rw_ios_per_sec": 0, 00:35:07.961 "rw_mbytes_per_sec": 0, 00:35:07.961 "r_mbytes_per_sec": 0, 00:35:07.961 "w_mbytes_per_sec": 0 00:35:07.962 }, 00:35:07.962 "claimed": true, 00:35:07.962 "claim_type": "exclusive_write", 00:35:07.962 "zoned": false, 00:35:07.962 "supported_io_types": { 00:35:07.962 "read": true, 00:35:07.962 "write": true, 00:35:07.962 "unmap": true, 00:35:07.962 "write_zeroes": true, 00:35:07.962 "flush": true, 00:35:07.962 "reset": true, 00:35:07.962 "compare": false, 00:35:07.962 "compare_and_write": false, 00:35:07.962 "abort": true, 00:35:07.962 "nvme_admin": false, 00:35:07.962 "nvme_io": false 00:35:07.962 }, 00:35:07.962 "memory_domains": [ 00:35:07.962 { 00:35:07.962 "dma_device_id": "system", 00:35:07.962 "dma_device_type": 1 00:35:07.962 }, 00:35:07.962 { 00:35:07.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.962 "dma_device_type": 2 00:35:07.962 } 00:35:07.962 ], 00:35:07.962 "driver_specific": {} 00:35:07.962 } 00:35:07.962 ] 00:35:07.962 19:28:23 -- common/autotest_common.sh@893 -- # return 0 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.962 19:28:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:08.219 19:28:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:08.219 "name": "Existed_Raid", 00:35:08.219 "uuid": "90bb2dfe-90ec-4d0f-ad24-c5c9e0556461", 00:35:08.220 "strip_size_kb": 64, 00:35:08.220 "state": "online", 00:35:08.220 "raid_level": "concat", 00:35:08.220 "superblock": false, 00:35:08.220 "num_base_bdevs": 4, 00:35:08.220 "num_base_bdevs_discovered": 4, 00:35:08.220 "num_base_bdevs_operational": 4, 00:35:08.220 "base_bdevs_list": [ 00:35:08.220 { 00:35:08.220 "name": "BaseBdev1", 00:35:08.220 "uuid": "d47ae99f-f2e9-4b42-894d-b7731bbfaae4", 00:35:08.220 "is_configured": true, 00:35:08.220 "data_offset": 0, 00:35:08.220 "data_size": 65536 00:35:08.220 }, 00:35:08.220 { 00:35:08.220 "name": "BaseBdev2", 00:35:08.220 "uuid": "6b64b5fb-a0c1-4e5e-87d8-94892f080b44", 00:35:08.220 "is_configured": true, 00:35:08.220 "data_offset": 0, 00:35:08.220 "data_size": 65536 00:35:08.220 }, 00:35:08.220 { 00:35:08.220 "name": "BaseBdev3", 00:35:08.220 "uuid": "d2e1097e-8edc-48a4-89a6-fd11d2a027aa", 00:35:08.220 "is_configured": true, 00:35:08.220 "data_offset": 0, 00:35:08.220 "data_size": 65536 00:35:08.220 }, 00:35:08.220 { 00:35:08.220 "name": "BaseBdev4", 00:35:08.220 "uuid": 
"f3832152-429e-465f-9920-aa69b822d5f8", 00:35:08.220 "is_configured": true, 00:35:08.220 "data_offset": 0, 00:35:08.220 "data_size": 65536 00:35:08.220 } 00:35:08.220 ] 00:35:08.220 }' 00:35:08.220 19:28:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:08.220 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:35:08.786 19:28:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:09.045 [2024-04-18 19:28:24.794285] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:09.045 [2024-04-18 19:28:24.794690] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:09.045 [2024-04-18 19:28:24.794998] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.045 19:28:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.303 19:28:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:09.303 "name": "Existed_Raid", 00:35:09.303 "uuid": "90bb2dfe-90ec-4d0f-ad24-c5c9e0556461", 00:35:09.303 "strip_size_kb": 64, 00:35:09.303 "state": "offline", 00:35:09.303 "raid_level": "concat", 00:35:09.303 "superblock": false, 00:35:09.303 "num_base_bdevs": 4, 00:35:09.303 "num_base_bdevs_discovered": 3, 00:35:09.303 "num_base_bdevs_operational": 3, 00:35:09.303 "base_bdevs_list": [ 00:35:09.303 { 00:35:09.303 "name": null, 00:35:09.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.303 "is_configured": false, 00:35:09.303 "data_offset": 0, 00:35:09.303 "data_size": 65536 00:35:09.303 }, 00:35:09.303 { 00:35:09.303 "name": "BaseBdev2", 00:35:09.303 "uuid": "6b64b5fb-a0c1-4e5e-87d8-94892f080b44", 00:35:09.303 "is_configured": true, 00:35:09.303 "data_offset": 0, 00:35:09.303 "data_size": 65536 00:35:09.303 }, 00:35:09.303 { 00:35:09.303 "name": "BaseBdev3", 00:35:09.303 "uuid": "d2e1097e-8edc-48a4-89a6-fd11d2a027aa", 00:35:09.303 "is_configured": true, 00:35:09.303 "data_offset": 0, 00:35:09.303 "data_size": 65536 00:35:09.303 }, 00:35:09.303 { 00:35:09.303 "name": "BaseBdev4", 00:35:09.303 "uuid": "f3832152-429e-465f-9920-aa69b822d5f8", 00:35:09.303 "is_configured": true, 00:35:09.303 "data_offset": 0, 00:35:09.303 "data_size": 
65536 00:35:09.303 } 00:35:09.303 ] 00:35:09.303 }' 00:35:09.303 19:28:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:09.303 19:28:25 -- common/autotest_common.sh@10 -- # set +x 00:35:10.237 19:28:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:10.237 19:28:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:10.237 19:28:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:10.237 19:28:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.495 19:28:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:10.495 19:28:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:10.495 19:28:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:10.495 [2024-04-18 19:28:26.372076] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:10.754 19:28:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:10.754 19:28:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:10.754 19:28:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.754 19:28:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:11.020 19:28:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:11.020 19:28:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:11.020 19:28:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:35:11.293 [2024-04-18 19:28:27.020732] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:11.293 19:28:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:11.293 19:28:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:11.293 19:28:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:11.293 19:28:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:11.551 19:28:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:11.551 19:28:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:11.551 19:28:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:35:11.809 [2024-04-18 19:28:27.542589] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:11.810 [2024-04-18 19:28:27.542893] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:35:11.810 19:28:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:11.810 19:28:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:11.810 19:28:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:11.810 19:28:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:12.068 19:28:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:12.068 19:28:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:12.068 19:28:27 -- bdev/bdev_raid.sh@287 -- # killprocess 129468 00:35:12.068 19:28:27 -- common/autotest_common.sh@936 -- # '[' -z 129468 ']' 00:35:12.068 19:28:27 -- common/autotest_common.sh@940 -- # kill -0 129468 00:35:12.068 19:28:27 -- common/autotest_common.sh@941 -- # uname 00:35:12.068 19:28:27 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:35:12.068 19:28:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129468 00:35:12.068 killing process with pid 129468 00:35:12.068 19:28:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:12.068 19:28:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:12.068 19:28:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129468' 00:35:12.068 19:28:27 -- common/autotest_common.sh@955 -- # kill 129468 00:35:12.068 19:28:27 -- common/autotest_common.sh@960 -- # wait 129468 00:35:12.068 [2024-04-18 19:28:27.877392] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:12.068 [2024-04-18 19:28:27.877551] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:13.524 ************************************ 00:35:13.524 END TEST raid_state_function_test 00:35:13.524 ************************************ 00:35:13.524 19:28:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:13.524 00:35:13.524 real 0m15.803s 00:35:13.524 user 0m27.752s 00:35:13.524 sys 0m2.036s 00:35:13.524 19:28:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:13.524 19:28:29 -- common/autotest_common.sh@10 -- # set +x 00:35:13.524 19:28:29 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:35:13.524 19:28:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:35:13.524 19:28:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:13.525 19:28:29 -- common/autotest_common.sh@10 -- # set +x 00:35:13.525 ************************************ 00:35:13.525 START TEST raid_state_function_test_sb 00:35:13.525 ************************************ 00:35:13.525 19:28:29 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 true 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:13.525 19:28:29 -- 
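raid_state_function_test_sb reruns the same state-machine test with superblock=true, so every bdev_raid_create call in this run carries the -s flag in addition to the 64 KiB strip size used for non-raid1 levels. A minimal sketch of the equivalent manual RPC call, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock as in the trace:

  # create a concat array with a 64 KiB strip size (-z 64) and on-disk superblocks (-s)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid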
bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=129964 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129964' 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:13.525 Process raid pid: 129964 00:35:13.525 19:28:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129964 /var/tmp/spdk-raid.sock 00:35:13.525 19:28:29 -- common/autotest_common.sh@817 -- # '[' -z 129964 ']' 00:35:13.525 19:28:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:13.525 19:28:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:13.525 19:28:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:13.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:13.525 19:28:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:13.525 19:28:29 -- common/autotest_common.sh@10 -- # set +x 00:35:13.525 [2024-04-18 19:28:29.324484] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:35:13.525 [2024-04-18 19:28:29.324781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.783 [2024-04-18 19:28:29.492758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.783 [2024-04-18 19:28:29.704825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.351 [2024-04-18 19:28:29.986678] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:14.609 19:28:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:14.609 19:28:30 -- common/autotest_common.sh@850 -- # return 0 00:35:14.609 19:28:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:14.867 [2024-04-18 19:28:30.549273] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:14.867 [2024-04-18 19:28:30.549476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:14.867 [2024-04-18 19:28:30.549555] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:14.867 [2024-04-18 19:28:30.549606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:14.867 [2024-04-18 19:28:30.549633] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:14.867 [2024-04-18 19:28:30.549697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:14.867 [2024-04-18 19:28:30.549879] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:14.867 [2024-04-18 
19:28:30.549927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:14.867 19:28:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:15.125 19:28:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:15.125 "name": "Existed_Raid", 00:35:15.125 "uuid": "c2ba5cac-93b8-4b4c-89d3-8abc4367aca8", 00:35:15.125 "strip_size_kb": 64, 00:35:15.125 "state": "configuring", 00:35:15.125 "raid_level": "concat", 00:35:15.125 "superblock": true, 00:35:15.125 "num_base_bdevs": 4, 00:35:15.125 "num_base_bdevs_discovered": 0, 00:35:15.125 "num_base_bdevs_operational": 4, 00:35:15.125 "base_bdevs_list": [ 00:35:15.125 { 00:35:15.125 "name": "BaseBdev1", 00:35:15.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.125 "is_configured": false, 00:35:15.125 "data_offset": 0, 00:35:15.125 "data_size": 0 00:35:15.125 }, 00:35:15.125 { 00:35:15.125 "name": "BaseBdev2", 00:35:15.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.125 "is_configured": false, 00:35:15.125 "data_offset": 0, 00:35:15.125 "data_size": 0 00:35:15.125 }, 00:35:15.125 { 00:35:15.125 "name": "BaseBdev3", 00:35:15.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.125 "is_configured": false, 00:35:15.125 "data_offset": 0, 00:35:15.125 "data_size": 0 00:35:15.125 }, 00:35:15.125 { 00:35:15.125 "name": "BaseBdev4", 00:35:15.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.125 "is_configured": false, 00:35:15.125 "data_offset": 0, 00:35:15.125 "data_size": 0 00:35:15.125 } 00:35:15.125 ] 00:35:15.125 }' 00:35:15.125 19:28:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:15.125 19:28:30 -- common/autotest_common.sh@10 -- # set +x 00:35:15.690 19:28:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:15.949 [2024-04-18 19:28:31.857367] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:15.949 [2024-04-18 19:28:31.857523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:35:15.949 19:28:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:16.219 [2024-04-18 19:28:32.057480] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:16.219 [2024-04-18 19:28:32.057725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev1 doesn't exist now 00:35:16.219 [2024-04-18 19:28:32.057799] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:16.219 [2024-04-18 19:28:32.057851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:16.219 [2024-04-18 19:28:32.057915] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:16.219 [2024-04-18 19:28:32.057977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:16.219 [2024-04-18 19:28:32.058119] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:16.219 [2024-04-18 19:28:32.058170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:16.219 19:28:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:16.504 [2024-04-18 19:28:32.295895] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:16.504 BaseBdev1 00:35:16.504 19:28:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:16.504 19:28:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:35:16.504 19:28:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:16.504 19:28:32 -- common/autotest_common.sh@887 -- # local i 00:35:16.504 19:28:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:16.504 19:28:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:16.504 19:28:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:16.763 19:28:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:17.021 [ 00:35:17.021 { 00:35:17.021 "name": "BaseBdev1", 00:35:17.021 "aliases": [ 00:35:17.021 "dce1b769-302b-4ce0-ab8b-cce44ff3debd" 00:35:17.021 ], 00:35:17.021 "product_name": "Malloc disk", 00:35:17.021 "block_size": 512, 00:35:17.021 "num_blocks": 65536, 00:35:17.021 "uuid": "dce1b769-302b-4ce0-ab8b-cce44ff3debd", 00:35:17.021 "assigned_rate_limits": { 00:35:17.021 "rw_ios_per_sec": 0, 00:35:17.021 "rw_mbytes_per_sec": 0, 00:35:17.021 "r_mbytes_per_sec": 0, 00:35:17.021 "w_mbytes_per_sec": 0 00:35:17.021 }, 00:35:17.021 "claimed": true, 00:35:17.021 "claim_type": "exclusive_write", 00:35:17.021 "zoned": false, 00:35:17.021 "supported_io_types": { 00:35:17.021 "read": true, 00:35:17.021 "write": true, 00:35:17.021 "unmap": true, 00:35:17.021 "write_zeroes": true, 00:35:17.021 "flush": true, 00:35:17.021 "reset": true, 00:35:17.021 "compare": false, 00:35:17.021 "compare_and_write": false, 00:35:17.021 "abort": true, 00:35:17.021 "nvme_admin": false, 00:35:17.021 "nvme_io": false 00:35:17.021 }, 00:35:17.021 "memory_domains": [ 00:35:17.021 { 00:35:17.021 "dma_device_id": "system", 00:35:17.021 "dma_device_type": 1 00:35:17.021 }, 00:35:17.021 { 00:35:17.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.021 "dma_device_type": 2 00:35:17.021 } 00:35:17.021 ], 00:35:17.021 "driver_specific": {} 00:35:17.021 } 00:35:17.021 ] 00:35:17.021 19:28:32 -- common/autotest_common.sh@893 -- # return 0 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:17.021 
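With only BaseBdev1 present, the array is expected to stay in the "configuring" state and to report the missing members with all-zero UUIDs. The verification step amounts to dumping the raid bdev list and filtering it with jq; a rough standalone equivalent of what verify_raid_bdev_state does here, assuming the same RPC socket:

  # expect "state": "configuring" and num_base_bdevs_discovered of 1 out of 4
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'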
19:28:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:17.021 19:28:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:17.280 19:28:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:17.280 "name": "Existed_Raid", 00:35:17.280 "uuid": "d2329023-5a43-4833-a72d-6d96672a31cc", 00:35:17.280 "strip_size_kb": 64, 00:35:17.280 "state": "configuring", 00:35:17.280 "raid_level": "concat", 00:35:17.280 "superblock": true, 00:35:17.280 "num_base_bdevs": 4, 00:35:17.280 "num_base_bdevs_discovered": 1, 00:35:17.280 "num_base_bdevs_operational": 4, 00:35:17.280 "base_bdevs_list": [ 00:35:17.280 { 00:35:17.280 "name": "BaseBdev1", 00:35:17.280 "uuid": "dce1b769-302b-4ce0-ab8b-cce44ff3debd", 00:35:17.280 "is_configured": true, 00:35:17.280 "data_offset": 2048, 00:35:17.280 "data_size": 63488 00:35:17.280 }, 00:35:17.280 { 00:35:17.280 "name": "BaseBdev2", 00:35:17.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.280 "is_configured": false, 00:35:17.280 "data_offset": 0, 00:35:17.280 "data_size": 0 00:35:17.280 }, 00:35:17.280 { 00:35:17.280 "name": "BaseBdev3", 00:35:17.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.280 "is_configured": false, 00:35:17.280 "data_offset": 0, 00:35:17.280 "data_size": 0 00:35:17.280 }, 00:35:17.280 { 00:35:17.280 "name": "BaseBdev4", 00:35:17.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.280 "is_configured": false, 00:35:17.280 "data_offset": 0, 00:35:17.280 "data_size": 0 00:35:17.280 } 00:35:17.280 ] 00:35:17.280 }' 00:35:17.280 19:28:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:17.280 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:35:17.845 19:28:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:18.412 [2024-04-18 19:28:34.040346] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:18.412 [2024-04-18 19:28:34.040589] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:35:18.412 19:28:34 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:35:18.412 19:28:34 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:18.671 19:28:34 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:18.930 BaseBdev1 00:35:18.930 19:28:34 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:35:18.930 19:28:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:35:18.930 19:28:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:18.930 19:28:34 -- common/autotest_common.sh@887 -- # local i 00:35:18.930 19:28:34 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:18.930 19:28:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:18.930 19:28:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:19.188 19:28:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:19.188 [ 00:35:19.188 { 00:35:19.188 "name": "BaseBdev1", 00:35:19.188 "aliases": [ 00:35:19.188 "4372c729-abef-4681-b1fe-de91898218d3" 00:35:19.188 ], 00:35:19.188 "product_name": "Malloc disk", 00:35:19.188 "block_size": 512, 00:35:19.188 "num_blocks": 65536, 00:35:19.188 "uuid": "4372c729-abef-4681-b1fe-de91898218d3", 00:35:19.188 "assigned_rate_limits": { 00:35:19.188 "rw_ios_per_sec": 0, 00:35:19.188 "rw_mbytes_per_sec": 0, 00:35:19.188 "r_mbytes_per_sec": 0, 00:35:19.188 "w_mbytes_per_sec": 0 00:35:19.188 }, 00:35:19.188 "claimed": false, 00:35:19.188 "zoned": false, 00:35:19.188 "supported_io_types": { 00:35:19.188 "read": true, 00:35:19.188 "write": true, 00:35:19.188 "unmap": true, 00:35:19.188 "write_zeroes": true, 00:35:19.188 "flush": true, 00:35:19.188 "reset": true, 00:35:19.188 "compare": false, 00:35:19.188 "compare_and_write": false, 00:35:19.188 "abort": true, 00:35:19.188 "nvme_admin": false, 00:35:19.188 "nvme_io": false 00:35:19.188 }, 00:35:19.188 "memory_domains": [ 00:35:19.188 { 00:35:19.188 "dma_device_id": "system", 00:35:19.188 "dma_device_type": 1 00:35:19.188 }, 00:35:19.188 { 00:35:19.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:19.188 "dma_device_type": 2 00:35:19.188 } 00:35:19.188 ], 00:35:19.188 "driver_specific": {} 00:35:19.188 } 00:35:19.188 ] 00:35:19.188 19:28:35 -- common/autotest_common.sh@893 -- # return 0 00:35:19.188 19:28:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:19.447 [2024-04-18 19:28:35.350036] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:19.447 [2024-04-18 19:28:35.352143] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:19.447 [2024-04-18 19:28:35.352234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:19.447 [2024-04-18 19:28:35.352246] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:19.448 [2024-04-18 19:28:35.352272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:19.448 [2024-04-18 19:28:35.352281] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:19.448 [2024-04-18 19:28:35.352299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:19.448 19:28:35 
-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.448 19:28:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:20.015 19:28:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:20.015 "name": "Existed_Raid", 00:35:20.015 "uuid": "95d2f449-09e2-439d-a08a-f386af7367cb", 00:35:20.015 "strip_size_kb": 64, 00:35:20.015 "state": "configuring", 00:35:20.015 "raid_level": "concat", 00:35:20.015 "superblock": true, 00:35:20.015 "num_base_bdevs": 4, 00:35:20.015 "num_base_bdevs_discovered": 1, 00:35:20.015 "num_base_bdevs_operational": 4, 00:35:20.015 "base_bdevs_list": [ 00:35:20.015 { 00:35:20.015 "name": "BaseBdev1", 00:35:20.015 "uuid": "4372c729-abef-4681-b1fe-de91898218d3", 00:35:20.015 "is_configured": true, 00:35:20.015 "data_offset": 2048, 00:35:20.015 "data_size": 63488 00:35:20.015 }, 00:35:20.015 { 00:35:20.015 "name": "BaseBdev2", 00:35:20.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.015 "is_configured": false, 00:35:20.015 "data_offset": 0, 00:35:20.015 "data_size": 0 00:35:20.015 }, 00:35:20.015 { 00:35:20.015 "name": "BaseBdev3", 00:35:20.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.015 "is_configured": false, 00:35:20.015 "data_offset": 0, 00:35:20.015 "data_size": 0 00:35:20.015 }, 00:35:20.015 { 00:35:20.015 "name": "BaseBdev4", 00:35:20.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.015 "is_configured": false, 00:35:20.015 "data_offset": 0, 00:35:20.015 "data_size": 0 00:35:20.015 } 00:35:20.015 ] 00:35:20.015 }' 00:35:20.015 19:28:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:20.015 19:28:35 -- common/autotest_common.sh@10 -- # set +x 00:35:20.584 19:28:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:20.843 [2024-04-18 19:28:36.720459] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:20.843 BaseBdev2 00:35:20.843 19:28:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:20.843 19:28:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:35:20.843 19:28:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:20.843 19:28:36 -- common/autotest_common.sh@887 -- # local i 00:35:20.843 19:28:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:20.843 19:28:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:20.843 19:28:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:21.102 19:28:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:21.361 [ 00:35:21.361 { 00:35:21.361 "name": "BaseBdev2", 00:35:21.361 "aliases": [ 00:35:21.361 "2b6f1579-6115-42de-a70f-4b8926aa35b0" 00:35:21.361 ], 00:35:21.361 "product_name": "Malloc disk", 00:35:21.361 "block_size": 512, 00:35:21.361 "num_blocks": 65536, 00:35:21.361 "uuid": "2b6f1579-6115-42de-a70f-4b8926aa35b0", 
00:35:21.361 "assigned_rate_limits": { 00:35:21.361 "rw_ios_per_sec": 0, 00:35:21.361 "rw_mbytes_per_sec": 0, 00:35:21.361 "r_mbytes_per_sec": 0, 00:35:21.361 "w_mbytes_per_sec": 0 00:35:21.361 }, 00:35:21.361 "claimed": true, 00:35:21.361 "claim_type": "exclusive_write", 00:35:21.361 "zoned": false, 00:35:21.361 "supported_io_types": { 00:35:21.361 "read": true, 00:35:21.361 "write": true, 00:35:21.361 "unmap": true, 00:35:21.361 "write_zeroes": true, 00:35:21.361 "flush": true, 00:35:21.361 "reset": true, 00:35:21.361 "compare": false, 00:35:21.361 "compare_and_write": false, 00:35:21.361 "abort": true, 00:35:21.361 "nvme_admin": false, 00:35:21.361 "nvme_io": false 00:35:21.361 }, 00:35:21.361 "memory_domains": [ 00:35:21.361 { 00:35:21.361 "dma_device_id": "system", 00:35:21.361 "dma_device_type": 1 00:35:21.361 }, 00:35:21.361 { 00:35:21.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:21.361 "dma_device_type": 2 00:35:21.361 } 00:35:21.361 ], 00:35:21.361 "driver_specific": {} 00:35:21.361 } 00:35:21.361 ] 00:35:21.361 19:28:37 -- common/autotest_common.sh@893 -- # return 0 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.361 19:28:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:21.628 19:28:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:21.628 "name": "Existed_Raid", 00:35:21.628 "uuid": "95d2f449-09e2-439d-a08a-f386af7367cb", 00:35:21.628 "strip_size_kb": 64, 00:35:21.628 "state": "configuring", 00:35:21.628 "raid_level": "concat", 00:35:21.628 "superblock": true, 00:35:21.628 "num_base_bdevs": 4, 00:35:21.628 "num_base_bdevs_discovered": 2, 00:35:21.628 "num_base_bdevs_operational": 4, 00:35:21.628 "base_bdevs_list": [ 00:35:21.628 { 00:35:21.628 "name": "BaseBdev1", 00:35:21.628 "uuid": "4372c729-abef-4681-b1fe-de91898218d3", 00:35:21.628 "is_configured": true, 00:35:21.628 "data_offset": 2048, 00:35:21.628 "data_size": 63488 00:35:21.628 }, 00:35:21.628 { 00:35:21.628 "name": "BaseBdev2", 00:35:21.628 "uuid": "2b6f1579-6115-42de-a70f-4b8926aa35b0", 00:35:21.628 "is_configured": true, 00:35:21.628 "data_offset": 2048, 00:35:21.628 "data_size": 63488 00:35:21.628 }, 00:35:21.628 { 00:35:21.628 "name": "BaseBdev3", 00:35:21.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:21.628 "is_configured": false, 00:35:21.628 "data_offset": 0, 00:35:21.628 "data_size": 0 00:35:21.628 }, 00:35:21.628 { 00:35:21.628 "name": "BaseBdev4", 00:35:21.628 "uuid": "00000000-0000-0000-0000-000000000000", 
00:35:21.628 "is_configured": false, 00:35:21.628 "data_offset": 0, 00:35:21.628 "data_size": 0 00:35:21.628 } 00:35:21.628 ] 00:35:21.628 }' 00:35:21.628 19:28:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:21.628 19:28:37 -- common/autotest_common.sh@10 -- # set +x 00:35:22.578 19:28:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:22.578 [2024-04-18 19:28:38.442054] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:22.578 BaseBdev3 00:35:22.578 19:28:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:35:22.578 19:28:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:35:22.578 19:28:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:22.578 19:28:38 -- common/autotest_common.sh@887 -- # local i 00:35:22.578 19:28:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:22.578 19:28:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:22.578 19:28:38 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:22.837 19:28:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:23.094 [ 00:35:23.094 { 00:35:23.094 "name": "BaseBdev3", 00:35:23.094 "aliases": [ 00:35:23.094 "e9a4cb58-3edb-4934-936a-ec2a77c1650c" 00:35:23.094 ], 00:35:23.094 "product_name": "Malloc disk", 00:35:23.094 "block_size": 512, 00:35:23.094 "num_blocks": 65536, 00:35:23.094 "uuid": "e9a4cb58-3edb-4934-936a-ec2a77c1650c", 00:35:23.094 "assigned_rate_limits": { 00:35:23.095 "rw_ios_per_sec": 0, 00:35:23.095 "rw_mbytes_per_sec": 0, 00:35:23.095 "r_mbytes_per_sec": 0, 00:35:23.095 "w_mbytes_per_sec": 0 00:35:23.095 }, 00:35:23.095 "claimed": true, 00:35:23.095 "claim_type": "exclusive_write", 00:35:23.095 "zoned": false, 00:35:23.095 "supported_io_types": { 00:35:23.095 "read": true, 00:35:23.095 "write": true, 00:35:23.095 "unmap": true, 00:35:23.095 "write_zeroes": true, 00:35:23.095 "flush": true, 00:35:23.095 "reset": true, 00:35:23.095 "compare": false, 00:35:23.095 "compare_and_write": false, 00:35:23.095 "abort": true, 00:35:23.095 "nvme_admin": false, 00:35:23.095 "nvme_io": false 00:35:23.095 }, 00:35:23.095 "memory_domains": [ 00:35:23.095 { 00:35:23.095 "dma_device_id": "system", 00:35:23.095 "dma_device_type": 1 00:35:23.095 }, 00:35:23.095 { 00:35:23.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:23.095 "dma_device_type": 2 00:35:23.095 } 00:35:23.095 ], 00:35:23.095 "driver_specific": {} 00:35:23.095 } 00:35:23.095 ] 00:35:23.095 19:28:38 -- common/autotest_common.sh@893 -- # return 0 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:23.095 19:28:38 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.095 19:28:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:23.353 19:28:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:23.353 "name": "Existed_Raid", 00:35:23.353 "uuid": "95d2f449-09e2-439d-a08a-f386af7367cb", 00:35:23.353 "strip_size_kb": 64, 00:35:23.353 "state": "configuring", 00:35:23.353 "raid_level": "concat", 00:35:23.353 "superblock": true, 00:35:23.353 "num_base_bdevs": 4, 00:35:23.353 "num_base_bdevs_discovered": 3, 00:35:23.353 "num_base_bdevs_operational": 4, 00:35:23.353 "base_bdevs_list": [ 00:35:23.353 { 00:35:23.353 "name": "BaseBdev1", 00:35:23.353 "uuid": "4372c729-abef-4681-b1fe-de91898218d3", 00:35:23.353 "is_configured": true, 00:35:23.353 "data_offset": 2048, 00:35:23.353 "data_size": 63488 00:35:23.353 }, 00:35:23.353 { 00:35:23.353 "name": "BaseBdev2", 00:35:23.353 "uuid": "2b6f1579-6115-42de-a70f-4b8926aa35b0", 00:35:23.353 "is_configured": true, 00:35:23.353 "data_offset": 2048, 00:35:23.353 "data_size": 63488 00:35:23.353 }, 00:35:23.353 { 00:35:23.353 "name": "BaseBdev3", 00:35:23.353 "uuid": "e9a4cb58-3edb-4934-936a-ec2a77c1650c", 00:35:23.353 "is_configured": true, 00:35:23.353 "data_offset": 2048, 00:35:23.353 "data_size": 63488 00:35:23.353 }, 00:35:23.353 { 00:35:23.353 "name": "BaseBdev4", 00:35:23.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:23.353 "is_configured": false, 00:35:23.353 "data_offset": 0, 00:35:23.353 "data_size": 0 00:35:23.353 } 00:35:23.353 ] 00:35:23.353 }' 00:35:23.353 19:28:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:23.353 19:28:39 -- common/autotest_common.sh@10 -- # set +x 00:35:24.289 19:28:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:35:24.548 [2024-04-18 19:28:40.279029] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:24.548 [2024-04-18 19:28:40.279245] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:35:24.548 [2024-04-18 19:28:40.279259] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:24.548 [2024-04-18 19:28:40.279424] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:35:24.548 [2024-04-18 19:28:40.279775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:35:24.548 [2024-04-18 19:28:40.279801] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:35:24.548 [2024-04-18 19:28:40.279946] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:24.548 BaseBdev4 00:35:24.548 19:28:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:35:24.548 19:28:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:35:24.548 19:28:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:24.548 19:28:40 -- common/autotest_common.sh@887 -- # local i 00:35:24.548 19:28:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:24.549 19:28:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:24.549 19:28:40 -- 
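With the fourth base bdev claimed, the array leaves "configuring" and comes online. The logged geometry is self-consistent: each 65536-block malloc member gives up its first 2048 blocks (1 MiB at a 512-byte block size) as the data_offset used by the superblock variant, leaving a data_size of 63488 blocks per member, so the concat array exposes

  4 * (65536 - 2048) = 4 * 63488 = 253952 blocks

which matches the "blockcnt 253952, blocklen 512" line above.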
common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:24.807 19:28:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:25.066 [ 00:35:25.066 { 00:35:25.066 "name": "BaseBdev4", 00:35:25.066 "aliases": [ 00:35:25.066 "fd2b4406-a1c4-4cfe-ae1e-47269f66269d" 00:35:25.066 ], 00:35:25.066 "product_name": "Malloc disk", 00:35:25.066 "block_size": 512, 00:35:25.066 "num_blocks": 65536, 00:35:25.066 "uuid": "fd2b4406-a1c4-4cfe-ae1e-47269f66269d", 00:35:25.066 "assigned_rate_limits": { 00:35:25.066 "rw_ios_per_sec": 0, 00:35:25.066 "rw_mbytes_per_sec": 0, 00:35:25.066 "r_mbytes_per_sec": 0, 00:35:25.066 "w_mbytes_per_sec": 0 00:35:25.066 }, 00:35:25.066 "claimed": true, 00:35:25.066 "claim_type": "exclusive_write", 00:35:25.066 "zoned": false, 00:35:25.066 "supported_io_types": { 00:35:25.066 "read": true, 00:35:25.066 "write": true, 00:35:25.066 "unmap": true, 00:35:25.066 "write_zeroes": true, 00:35:25.066 "flush": true, 00:35:25.066 "reset": true, 00:35:25.066 "compare": false, 00:35:25.066 "compare_and_write": false, 00:35:25.066 "abort": true, 00:35:25.066 "nvme_admin": false, 00:35:25.066 "nvme_io": false 00:35:25.066 }, 00:35:25.066 "memory_domains": [ 00:35:25.066 { 00:35:25.066 "dma_device_id": "system", 00:35:25.066 "dma_device_type": 1 00:35:25.066 }, 00:35:25.066 { 00:35:25.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:25.066 "dma_device_type": 2 00:35:25.066 } 00:35:25.066 ], 00:35:25.066 "driver_specific": {} 00:35:25.066 } 00:35:25.066 ] 00:35:25.066 19:28:40 -- common/autotest_common.sh@893 -- # return 0 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.066 19:28:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:25.325 19:28:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:25.325 "name": "Existed_Raid", 00:35:25.325 "uuid": "95d2f449-09e2-439d-a08a-f386af7367cb", 00:35:25.325 "strip_size_kb": 64, 00:35:25.325 "state": "online", 00:35:25.325 "raid_level": "concat", 00:35:25.325 "superblock": true, 00:35:25.325 "num_base_bdevs": 4, 00:35:25.325 "num_base_bdevs_discovered": 4, 00:35:25.325 "num_base_bdevs_operational": 4, 00:35:25.325 "base_bdevs_list": [ 00:35:25.325 { 00:35:25.325 "name": "BaseBdev1", 00:35:25.325 "uuid": "4372c729-abef-4681-b1fe-de91898218d3", 00:35:25.325 "is_configured": true, 00:35:25.325 "data_offset": 2048, 
00:35:25.325 "data_size": 63488 00:35:25.325 }, 00:35:25.325 { 00:35:25.325 "name": "BaseBdev2", 00:35:25.325 "uuid": "2b6f1579-6115-42de-a70f-4b8926aa35b0", 00:35:25.325 "is_configured": true, 00:35:25.325 "data_offset": 2048, 00:35:25.325 "data_size": 63488 00:35:25.325 }, 00:35:25.325 { 00:35:25.325 "name": "BaseBdev3", 00:35:25.325 "uuid": "e9a4cb58-3edb-4934-936a-ec2a77c1650c", 00:35:25.325 "is_configured": true, 00:35:25.325 "data_offset": 2048, 00:35:25.325 "data_size": 63488 00:35:25.325 }, 00:35:25.325 { 00:35:25.325 "name": "BaseBdev4", 00:35:25.325 "uuid": "fd2b4406-a1c4-4cfe-ae1e-47269f66269d", 00:35:25.325 "is_configured": true, 00:35:25.325 "data_offset": 2048, 00:35:25.325 "data_size": 63488 00:35:25.325 } 00:35:25.325 ] 00:35:25.325 }' 00:35:25.325 19:28:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:25.325 19:28:41 -- common/autotest_common.sh@10 -- # set +x 00:35:25.890 19:28:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:26.149 [2024-04-18 19:28:41.899755] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:26.149 [2024-04-18 19:28:41.899796] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:26.149 [2024-04-18 19:28:41.899847] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.149 19:28:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:26.408 19:28:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:26.408 "name": "Existed_Raid", 00:35:26.408 "uuid": "95d2f449-09e2-439d-a08a-f386af7367cb", 00:35:26.408 "strip_size_kb": 64, 00:35:26.408 "state": "offline", 00:35:26.408 "raid_level": "concat", 00:35:26.408 "superblock": true, 00:35:26.409 "num_base_bdevs": 4, 00:35:26.409 "num_base_bdevs_discovered": 3, 00:35:26.409 "num_base_bdevs_operational": 3, 00:35:26.409 "base_bdevs_list": [ 00:35:26.409 { 00:35:26.409 "name": null, 00:35:26.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:26.409 "is_configured": false, 00:35:26.409 "data_offset": 2048, 00:35:26.409 "data_size": 63488 00:35:26.409 }, 00:35:26.409 { 00:35:26.409 "name": "BaseBdev2", 00:35:26.409 "uuid": 
"2b6f1579-6115-42de-a70f-4b8926aa35b0", 00:35:26.409 "is_configured": true, 00:35:26.409 "data_offset": 2048, 00:35:26.409 "data_size": 63488 00:35:26.409 }, 00:35:26.409 { 00:35:26.409 "name": "BaseBdev3", 00:35:26.409 "uuid": "e9a4cb58-3edb-4934-936a-ec2a77c1650c", 00:35:26.409 "is_configured": true, 00:35:26.409 "data_offset": 2048, 00:35:26.409 "data_size": 63488 00:35:26.409 }, 00:35:26.409 { 00:35:26.409 "name": "BaseBdev4", 00:35:26.409 "uuid": "fd2b4406-a1c4-4cfe-ae1e-47269f66269d", 00:35:26.409 "is_configured": true, 00:35:26.409 "data_offset": 2048, 00:35:26.409 "data_size": 63488 00:35:26.409 } 00:35:26.409 ] 00:35:26.409 }' 00:35:26.409 19:28:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:26.409 19:28:42 -- common/autotest_common.sh@10 -- # set +x 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:27.345 19:28:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:27.603 [2024-04-18 19:28:43.508796] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:27.862 19:28:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:27.862 19:28:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:27.862 19:28:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.862 19:28:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:28.122 19:28:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:28.122 19:28:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:28.122 19:28:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:35:28.381 [2024-04-18 19:28:44.087735] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:28.381 19:28:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:28.381 19:28:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:28.381 19:28:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.381 19:28:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:28.639 19:28:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:28.639 19:28:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:28.639 19:28:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:35:28.898 [2024-04-18 19:28:44.733266] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:28.898 [2024-04-18 19:28:44.733329] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:35:29.155 19:28:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:29.155 19:28:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:29.155 19:28:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:35:29.155 19:28:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:29.414 19:28:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:29.414 19:28:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:29.414 19:28:45 -- bdev/bdev_raid.sh@287 -- # killprocess 129964 00:35:29.414 19:28:45 -- common/autotest_common.sh@936 -- # '[' -z 129964 ']' 00:35:29.414 19:28:45 -- common/autotest_common.sh@940 -- # kill -0 129964 00:35:29.414 19:28:45 -- common/autotest_common.sh@941 -- # uname 00:35:29.414 19:28:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:29.414 19:28:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129964 00:35:29.414 killing process with pid 129964 00:35:29.414 19:28:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:29.414 19:28:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:29.414 19:28:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129964' 00:35:29.414 19:28:45 -- common/autotest_common.sh@955 -- # kill 129964 00:35:29.414 19:28:45 -- common/autotest_common.sh@960 -- # wait 129964 00:35:29.414 [2024-04-18 19:28:45.149243] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:29.414 [2024-04-18 19:28:45.149367] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:30.833 ************************************ 00:35:30.833 END TEST raid_state_function_test_sb 00:35:30.833 ************************************ 00:35:30.833 19:28:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:30.833 00:35:30.833 real 0m17.239s 00:35:30.833 user 0m30.547s 00:35:30.833 sys 0m2.097s 00:35:30.833 19:28:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:30.833 19:28:46 -- common/autotest_common.sh@10 -- # set +x 00:35:30.833 19:28:46 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:35:30.833 19:28:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:35:30.833 19:28:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:30.833 19:28:46 -- common/autotest_common.sh@10 -- # set +x 00:35:30.833 ************************************ 00:35:30.833 START TEST raid_superblock_test 00:35:30.833 ************************************ 00:35:30.833 19:28:46 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 4 00:35:30.833 19:28:46 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:35:30.833 19:28:46 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@351 -- # 
strip_size_create_arg='-z 64' 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@357 -- # raid_pid=130470 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:30.834 19:28:46 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130470 /var/tmp/spdk-raid.sock 00:35:30.834 19:28:46 -- common/autotest_common.sh@817 -- # '[' -z 130470 ']' 00:35:30.834 19:28:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:30.834 19:28:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:30.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:30.834 19:28:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:30.834 19:28:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:30.834 19:28:46 -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 [2024-04-18 19:28:46.663090] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:35:30.834 [2024-04-18 19:28:46.663245] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130470 ] 00:35:31.093 [2024-04-18 19:28:46.826403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.351 [2024-04-18 19:28:47.054135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.351 [2024-04-18 19:28:47.262958] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:31.918 19:28:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:31.918 19:28:47 -- common/autotest_common.sh@850 -- # return 0 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:31.918 19:28:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:35:32.177 malloc1 00:35:32.177 19:28:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:32.435 [2024-04-18 19:28:48.133728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:32.435 [2024-04-18 19:28:48.134019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.435 [2024-04-18 19:28:48.134085] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:32.435 [2024-04-18 19:28:48.134216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.435 [2024-04-18 19:28:48.136756] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.435 [2024-04-18 
19:28:48.136908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:32.435 pt1 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:32.435 19:28:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:35:32.701 malloc2 00:35:32.701 19:28:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:32.959 [2024-04-18 19:28:48.680973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:32.959 [2024-04-18 19:28:48.681218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.959 [2024-04-18 19:28:48.681364] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:32.959 [2024-04-18 19:28:48.681489] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.959 [2024-04-18 19:28:48.683965] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.959 [2024-04-18 19:28:48.684129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:32.959 pt2 00:35:32.959 19:28:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:32.959 19:28:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:32.960 19:28:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:35:33.218 malloc3 00:35:33.218 19:28:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:33.218 [2024-04-18 19:28:49.134087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:33.218 [2024-04-18 19:28:49.134328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:33.218 [2024-04-18 19:28:49.134398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:33.218 [2024-04-18 19:28:49.134521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:33.218 [2024-04-18 19:28:49.136992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:33.218 [2024-04-18 
19:28:49.137156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:33.218 pt3 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:35:33.477 malloc4 00:35:33.477 19:28:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:33.735 [2024-04-18 19:28:49.659509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:33.735 [2024-04-18 19:28:49.659769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:33.735 [2024-04-18 19:28:49.659903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:33.735 [2024-04-18 19:28:49.660030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:33.735 [2024-04-18 19:28:49.662592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:33.993 [2024-04-18 19:28:49.662760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:33.993 pt4 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:35:33.993 [2024-04-18 19:28:49.879948] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:33.993 [2024-04-18 19:28:49.882264] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:33.993 [2024-04-18 19:28:49.882443] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:33.993 [2024-04-18 19:28:49.882573] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:33.993 [2024-04-18 19:28:49.882885] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:35:33.993 [2024-04-18 19:28:49.882994] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:33.993 [2024-04-18 19:28:49.883169] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:35:33.993 [2024-04-18 19:28:49.883590] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:35:33.993 [2024-04-18 19:28:49.883632] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:35:33.993 [2024-04-18 19:28:49.884075] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.993 19:28:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:34.251 19:28:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:34.251 "name": "raid_bdev1", 00:35:34.251 "uuid": "3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c", 00:35:34.251 "strip_size_kb": 64, 00:35:34.251 "state": "online", 00:35:34.251 "raid_level": "concat", 00:35:34.251 "superblock": true, 00:35:34.251 "num_base_bdevs": 4, 00:35:34.251 "num_base_bdevs_discovered": 4, 00:35:34.251 "num_base_bdevs_operational": 4, 00:35:34.251 "base_bdevs_list": [ 00:35:34.251 { 00:35:34.251 "name": "pt1", 00:35:34.251 "uuid": "5859192d-fc0f-58d7-a483-9708566bafce", 00:35:34.251 "is_configured": true, 00:35:34.251 "data_offset": 2048, 00:35:34.251 "data_size": 63488 00:35:34.251 }, 00:35:34.251 { 00:35:34.251 "name": "pt2", 00:35:34.251 "uuid": "fa48bf52-1cd5-5a27-a72d-8cb7c4715028", 00:35:34.251 "is_configured": true, 00:35:34.251 "data_offset": 2048, 00:35:34.251 "data_size": 63488 00:35:34.251 }, 00:35:34.251 { 00:35:34.251 "name": "pt3", 00:35:34.251 "uuid": "d625b098-6818-5137-86be-20547f849417", 00:35:34.251 "is_configured": true, 00:35:34.251 "data_offset": 2048, 00:35:34.251 "data_size": 63488 00:35:34.251 }, 00:35:34.251 { 00:35:34.251 "name": "pt4", 00:35:34.251 "uuid": "ff6fbac4-3ef0-564d-a0f8-3aa07928e439", 00:35:34.251 "is_configured": true, 00:35:34.251 "data_offset": 2048, 00:35:34.251 "data_size": 63488 00:35:34.251 } 00:35:34.251 ] 00:35:34.251 }' 00:35:34.251 19:28:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:34.251 19:28:50 -- common/autotest_common.sh@10 -- # set +x 00:35:35.256 19:28:50 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:35.256 19:28:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:35:35.256 [2024-04-18 19:28:51.092572] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:35.256 19:28:51 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c 00:35:35.256 19:28:51 -- bdev/bdev_raid.sh@380 -- # '[' -z 3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c ']' 00:35:35.256 19:28:51 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:35.514 [2024-04-18 19:28:51.356391] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:35.514 [2024-04-18 19:28:51.356582] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:35.514 [2024-04-18 19:28:51.356764] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:35.514 
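For reference, the RPC sequence the test has driven up to this point (malloc base bdevs wrapped in passthru bdevs, assembled into a concat raid with an on-disk superblock, then verified and torn down) reduces to the minimal sketch below. It assumes a target is already listening on /var/tmp/spdk-raid.sock and mirrors the names, 32 MiB / 512 B malloc geometry, and 64 KiB strip size seen in the log; the rpc() wrapper is an illustrative shorthand, not part of the test script.

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# One malloc bdev per base, each wrapped in a passthru bdev so the raid claims pt1..pt4.
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a concat raid with a 64 KiB strip and a superblock (-s), then check it came up online.
rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Teardown: delete the raid bdev first, then the passthru bdevs underneath it.
rpc bdev_raid_delete raid_bdev1
for i in 1 2 3 4; do rpc bdev_passthru_delete "pt$i"; done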
[2024-04-18 19:28:51.356986] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:35.514 [2024-04-18 19:28:51.357095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:35:35.514 19:28:51 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.514 19:28:51 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:35:35.772 19:28:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:35:35.772 19:28:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:35:35.772 19:28:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:35.772 19:28:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:36.029 19:28:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:36.029 19:28:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:36.287 19:28:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:36.287 19:28:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:36.545 19:28:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:36.545 19:28:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:36.803 19:28:52 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:36.803 19:28:52 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:37.061 19:28:52 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:35:37.061 19:28:52 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:37.061 19:28:52 -- common/autotest_common.sh@638 -- # local es=0 00:35:37.061 19:28:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:37.061 19:28:52 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.061 19:28:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.061 19:28:52 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.061 19:28:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.061 19:28:52 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.061 19:28:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.061 19:28:52 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.061 19:28:52 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:37.061 19:28:52 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:37.319 [2024-04-18 19:28:52.992850] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is 
claimed 00:35:37.319 [2024-04-18 19:28:52.995060] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:37.319 [2024-04-18 19:28:52.995115] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:37.319 [2024-04-18 19:28:52.995154] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:37.319 [2024-04-18 19:28:52.995201] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:35:37.319 [2024-04-18 19:28:52.995268] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:35:37.319 [2024-04-18 19:28:52.995298] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:35:37.319 [2024-04-18 19:28:52.995356] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:35:37.319 [2024-04-18 19:28:52.995395] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:37.319 [2024-04-18 19:28:52.995405] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:35:37.319 request: 00:35:37.319 { 00:35:37.319 "name": "raid_bdev1", 00:35:37.319 "raid_level": "concat", 00:35:37.319 "base_bdevs": [ 00:35:37.319 "malloc1", 00:35:37.319 "malloc2", 00:35:37.319 "malloc3", 00:35:37.319 "malloc4" 00:35:37.319 ], 00:35:37.319 "superblock": false, 00:35:37.319 "strip_size_kb": 64, 00:35:37.319 "method": "bdev_raid_create", 00:35:37.319 "req_id": 1 00:35:37.319 } 00:35:37.319 Got JSON-RPC error response 00:35:37.319 response: 00:35:37.319 { 00:35:37.319 "code": -17, 00:35:37.319 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:37.319 } 00:35:37.319 19:28:53 -- common/autotest_common.sh@641 -- # es=1 00:35:37.319 19:28:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:37.319 19:28:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:37.319 19:28:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:37.319 19:28:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:35:37.319 19:28:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:37.577 [2024-04-18 19:28:53.476902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:37.577 [2024-04-18 19:28:53.477035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:37.577 [2024-04-18 19:28:53.477091] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:37.577 [2024-04-18 19:28:53.477125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:37.577 [2024-04-18 19:28:53.479861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:37.577 [2024-04-18 19:28:53.479941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:37.577 [2024-04-18 19:28:53.480097] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt1 00:35:37.577 [2024-04-18 19:28:53.480155] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:37.577 pt1 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:37.577 19:28:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.838 19:28:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:37.838 "name": "raid_bdev1", 00:35:37.838 "uuid": "3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c", 00:35:37.838 "strip_size_kb": 64, 00:35:37.838 "state": "configuring", 00:35:37.838 "raid_level": "concat", 00:35:37.838 "superblock": true, 00:35:37.838 "num_base_bdevs": 4, 00:35:37.838 "num_base_bdevs_discovered": 1, 00:35:37.838 "num_base_bdevs_operational": 4, 00:35:37.838 "base_bdevs_list": [ 00:35:37.838 { 00:35:37.838 "name": "pt1", 00:35:37.838 "uuid": "5859192d-fc0f-58d7-a483-9708566bafce", 00:35:37.838 "is_configured": true, 00:35:37.838 "data_offset": 2048, 00:35:37.838 "data_size": 63488 00:35:37.838 }, 00:35:37.838 { 00:35:37.838 "name": null, 00:35:37.838 "uuid": "fa48bf52-1cd5-5a27-a72d-8cb7c4715028", 00:35:37.838 "is_configured": false, 00:35:37.838 "data_offset": 2048, 00:35:37.838 "data_size": 63488 00:35:37.838 }, 00:35:37.838 { 00:35:37.838 "name": null, 00:35:37.838 "uuid": "d625b098-6818-5137-86be-20547f849417", 00:35:37.838 "is_configured": false, 00:35:37.838 "data_offset": 2048, 00:35:37.838 "data_size": 63488 00:35:37.838 }, 00:35:37.838 { 00:35:37.838 "name": null, 00:35:37.838 "uuid": "ff6fbac4-3ef0-564d-a0f8-3aa07928e439", 00:35:37.838 "is_configured": false, 00:35:37.838 "data_offset": 2048, 00:35:37.838 "data_size": 63488 00:35:37.838 } 00:35:37.838 ] 00:35:37.838 }' 00:35:37.838 19:28:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:37.838 19:28:53 -- common/autotest_common.sh@10 -- # set +x 00:35:38.772 19:28:54 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:35:38.772 19:28:54 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:39.030 [2024-04-18 19:28:54.753176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:39.030 [2024-04-18 19:28:54.753266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:39.030 [2024-04-18 19:28:54.753307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:39.030 [2024-04-18 19:28:54.753329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:39.030 [2024-04-18 19:28:54.753801] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:35:39.030 [2024-04-18 19:28:54.753852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:39.030 [2024-04-18 19:28:54.753973] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:39.030 [2024-04-18 19:28:54.753997] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:39.030 pt2 00:35:39.030 19:28:54 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:39.287 [2024-04-18 19:28:55.025270] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.287 19:28:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.545 19:28:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:39.545 "name": "raid_bdev1", 00:35:39.545 "uuid": "3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c", 00:35:39.545 "strip_size_kb": 64, 00:35:39.545 "state": "configuring", 00:35:39.545 "raid_level": "concat", 00:35:39.545 "superblock": true, 00:35:39.545 "num_base_bdevs": 4, 00:35:39.545 "num_base_bdevs_discovered": 1, 00:35:39.545 "num_base_bdevs_operational": 4, 00:35:39.545 "base_bdevs_list": [ 00:35:39.545 { 00:35:39.545 "name": "pt1", 00:35:39.545 "uuid": "5859192d-fc0f-58d7-a483-9708566bafce", 00:35:39.545 "is_configured": true, 00:35:39.545 "data_offset": 2048, 00:35:39.545 "data_size": 63488 00:35:39.545 }, 00:35:39.545 { 00:35:39.545 "name": null, 00:35:39.545 "uuid": "fa48bf52-1cd5-5a27-a72d-8cb7c4715028", 00:35:39.545 "is_configured": false, 00:35:39.545 "data_offset": 2048, 00:35:39.545 "data_size": 63488 00:35:39.545 }, 00:35:39.545 { 00:35:39.545 "name": null, 00:35:39.545 "uuid": "d625b098-6818-5137-86be-20547f849417", 00:35:39.545 "is_configured": false, 00:35:39.545 "data_offset": 2048, 00:35:39.545 "data_size": 63488 00:35:39.545 }, 00:35:39.545 { 00:35:39.545 "name": null, 00:35:39.545 "uuid": "ff6fbac4-3ef0-564d-a0f8-3aa07928e439", 00:35:39.545 "is_configured": false, 00:35:39.545 "data_offset": 2048, 00:35:39.545 "data_size": 63488 00:35:39.545 } 00:35:39.545 ] 00:35:39.545 }' 00:35:39.545 19:28:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:39.545 19:28:55 -- common/autotest_common.sh@10 -- # set +x 00:35:40.112 19:28:55 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:35:40.112 19:28:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:40.112 19:28:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:35:40.370 [2024-04-18 19:28:56.197599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:40.370 [2024-04-18 19:28:56.197689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.370 [2024-04-18 19:28:56.197728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:40.370 [2024-04-18 19:28:56.197750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.370 [2024-04-18 19:28:56.198208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.370 [2024-04-18 19:28:56.198264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:40.370 [2024-04-18 19:28:56.198363] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:40.370 [2024-04-18 19:28:56.198386] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:40.370 pt2 00:35:40.370 19:28:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:35:40.370 19:28:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:40.370 19:28:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:40.628 [2024-04-18 19:28:56.477661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:40.628 [2024-04-18 19:28:56.477748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.628 [2024-04-18 19:28:56.477779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:40.628 [2024-04-18 19:28:56.477805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.628 [2024-04-18 19:28:56.478283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.628 [2024-04-18 19:28:56.478351] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:40.628 [2024-04-18 19:28:56.478442] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:35:40.628 [2024-04-18 19:28:56.478466] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:40.628 pt3 00:35:40.628 19:28:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:35:40.628 19:28:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:40.628 19:28:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:40.886 [2024-04-18 19:28:56.761745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:40.886 [2024-04-18 19:28:56.761853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.886 [2024-04-18 19:28:56.761895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:40.886 [2024-04-18 19:28:56.761925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.886 [2024-04-18 19:28:56.762420] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.886 [2024-04-18 19:28:56.762482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:40.886 [2024-04-18 19:28:56.762598] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
pt4 00:35:40.886 [2024-04-18 19:28:56.762625] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:40.886 [2024-04-18 19:28:56.762751] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:35:40.886 [2024-04-18 19:28:56.762762] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:40.886 [2024-04-18 19:28:56.762862] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:40.886 [2024-04-18 19:28:56.763196] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:35:40.886 [2024-04-18 19:28:56.763218] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:35:40.886 [2024-04-18 19:28:56.763377] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:40.886 pt4 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.886 19:28:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.144 19:28:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:41.144 "name": "raid_bdev1", 00:35:41.144 "uuid": "3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c", 00:35:41.144 "strip_size_kb": 64, 00:35:41.144 "state": "online", 00:35:41.144 "raid_level": "concat", 00:35:41.144 "superblock": true, 00:35:41.144 "num_base_bdevs": 4, 00:35:41.144 "num_base_bdevs_discovered": 4, 00:35:41.144 "num_base_bdevs_operational": 4, 00:35:41.144 "base_bdevs_list": [ 00:35:41.144 { 00:35:41.144 "name": "pt1", 00:35:41.144 "uuid": "5859192d-fc0f-58d7-a483-9708566bafce", 00:35:41.144 "is_configured": true, 00:35:41.144 "data_offset": 2048, 00:35:41.144 "data_size": 63488 00:35:41.144 }, 00:35:41.144 { 00:35:41.144 "name": "pt2", 00:35:41.144 "uuid": "fa48bf52-1cd5-5a27-a72d-8cb7c4715028", 00:35:41.144 "is_configured": true, 00:35:41.144 "data_offset": 2048, 00:35:41.144 "data_size": 63488 00:35:41.144 }, 00:35:41.144 { 00:35:41.144 "name": "pt3", 00:35:41.144 "uuid": "d625b098-6818-5137-86be-20547f849417", 00:35:41.144 "is_configured": true, 00:35:41.144 "data_offset": 2048, 00:35:41.144 "data_size": 63488 00:35:41.144 }, 00:35:41.144 { 00:35:41.144 "name": "pt4", 00:35:41.144 "uuid": "ff6fbac4-3ef0-564d-a0f8-3aa07928e439", 00:35:41.144 "is_configured": true, 00:35:41.144 "data_offset": 2048, 00:35:41.144 "data_size": 63488 00:35:41.144 } 00:35:41.144 ] 00:35:41.144 }' 00:35:41.144 19:28:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:41.144 
19:28:57 -- common/autotest_common.sh@10 -- # set +x 00:35:42.079 19:28:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:42.079 19:28:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:35:42.079 [2024-04-18 19:28:57.994209] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:42.337 19:28:58 -- bdev/bdev_raid.sh@430 -- # '[' 3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c '!=' 3a4f32a8-b0e1-49c8-8b3e-17dc4b4f9a7c ']' 00:35:42.337 19:28:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:35:42.337 19:28:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:42.337 19:28:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:35:42.337 19:28:58 -- bdev/bdev_raid.sh@511 -- # killprocess 130470 00:35:42.337 19:28:58 -- common/autotest_common.sh@936 -- # '[' -z 130470 ']' 00:35:42.337 19:28:58 -- common/autotest_common.sh@940 -- # kill -0 130470 00:35:42.337 19:28:58 -- common/autotest_common.sh@941 -- # uname 00:35:42.337 19:28:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:42.337 19:28:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130470 00:35:42.337 killing process with pid 130470 00:35:42.337 19:28:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:42.337 19:28:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:42.337 19:28:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130470' 00:35:42.337 19:28:58 -- common/autotest_common.sh@955 -- # kill 130470 00:35:42.337 19:28:58 -- common/autotest_common.sh@960 -- # wait 130470 00:35:42.337 [2024-04-18 19:28:58.037575] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:42.337 [2024-04-18 19:28:58.037651] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:42.338 [2024-04-18 19:28:58.037727] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:42.338 [2024-04-18 19:28:58.037736] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:35:42.596 [2024-04-18 19:28:58.407516] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:44.038 ************************************ 00:35:44.038 END TEST raid_superblock_test 00:35:44.038 ************************************ 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:35:44.038 00:35:44.038 real 0m13.201s 00:35:44.038 user 0m22.647s 00:35:44.038 sys 0m1.709s 00:35:44.038 19:28:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:44.038 19:28:59 -- common/autotest_common.sh@10 -- # set +x 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:35:44.038 19:28:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:35:44.038 19:28:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:44.038 19:28:59 -- common/autotest_common.sh@10 -- # set +x 00:35:44.038 ************************************ 00:35:44.038 START TEST raid_state_function_test 00:35:44.038 ************************************ 00:35:44.038 19:28:59 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 false 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 
00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=130833 00:35:44.038 Process raid pid: 130833 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130833' 00:35:44.038 19:28:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130833 /var/tmp/spdk-raid.sock 00:35:44.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:44.038 19:28:59 -- common/autotest_common.sh@817 -- # '[' -z 130833 ']' 00:35:44.038 19:28:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:44.038 19:28:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:44.038 19:28:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:44.038 19:28:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:44.038 19:28:59 -- common/autotest_common.sh@10 -- # set +x 00:35:44.038 [2024-04-18 19:28:59.963653] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
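The state-function test that starts here drives a freshly launched bdev_svc app over the same RPC socket and deliberately creates the raid before any base bdev exists, which is why Existed_Raid first reports the "configuring" state below. A minimal sketch of that startup and first check, assuming the autotest_common.sh helpers (waitforlisten) are sourced and omitting the delete/re-create steps the test performs between adding base bdevs:

# Bare bdev_svc target with bdev_raid debug logging and a private RPC socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# No BaseBdev exists yet, so the raid1 stays in "configuring" with 0 of 4 base bdevs discovered.
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Adding the malloc-backed BaseBdevs one at a time raises num_base_bdevs_discovered;
# only once all four are claimed does the raid transition to "online".
rpc bdev_malloc_create 32 512 -b BaseBdev1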
00:35:44.038 [2024-04-18 19:28:59.963835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.295 [2024-04-18 19:29:00.136334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.553 [2024-04-18 19:29:00.332352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.811 [2024-04-18 19:29:00.545982] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:45.122 19:29:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:45.122 19:29:00 -- common/autotest_common.sh@850 -- # return 0 00:35:45.122 19:29:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:45.381 [2024-04-18 19:29:01.227830] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:45.381 [2024-04-18 19:29:01.227925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:45.381 [2024-04-18 19:29:01.227941] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:45.381 [2024-04-18 19:29:01.227986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:45.381 [2024-04-18 19:29:01.227996] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:45.381 [2024-04-18 19:29:01.228048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:45.381 [2024-04-18 19:29:01.228059] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:45.381 [2024-04-18 19:29:01.228090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:45.381 19:29:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:35:45.381 19:29:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.382 19:29:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:45.640 19:29:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:45.640 "name": "Existed_Raid", 00:35:45.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.640 "strip_size_kb": 0, 00:35:45.640 "state": "configuring", 00:35:45.640 "raid_level": "raid1", 00:35:45.641 "superblock": false, 00:35:45.641 "num_base_bdevs": 4, 00:35:45.641 "num_base_bdevs_discovered": 0, 00:35:45.641 "num_base_bdevs_operational": 4, 00:35:45.641 "base_bdevs_list": [ 00:35:45.641 { 00:35:45.641 "name": 
"BaseBdev1", 00:35:45.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.641 "is_configured": false, 00:35:45.641 "data_offset": 0, 00:35:45.641 "data_size": 0 00:35:45.641 }, 00:35:45.641 { 00:35:45.641 "name": "BaseBdev2", 00:35:45.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.641 "is_configured": false, 00:35:45.641 "data_offset": 0, 00:35:45.641 "data_size": 0 00:35:45.641 }, 00:35:45.641 { 00:35:45.641 "name": "BaseBdev3", 00:35:45.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.641 "is_configured": false, 00:35:45.641 "data_offset": 0, 00:35:45.641 "data_size": 0 00:35:45.641 }, 00:35:45.641 { 00:35:45.641 "name": "BaseBdev4", 00:35:45.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.641 "is_configured": false, 00:35:45.641 "data_offset": 0, 00:35:45.641 "data_size": 0 00:35:45.641 } 00:35:45.641 ] 00:35:45.641 }' 00:35:45.641 19:29:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:45.641 19:29:01 -- common/autotest_common.sh@10 -- # set +x 00:35:46.206 19:29:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:46.464 [2024-04-18 19:29:02.375880] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:46.464 [2024-04-18 19:29:02.375919] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:35:46.722 19:29:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:46.722 [2024-04-18 19:29:02.647960] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:46.722 [2024-04-18 19:29:02.648058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:46.722 [2024-04-18 19:29:02.648070] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:46.722 [2024-04-18 19:29:02.648096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:46.722 [2024-04-18 19:29:02.648104] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:46.723 [2024-04-18 19:29:02.648140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:46.723 [2024-04-18 19:29:02.648147] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:46.723 [2024-04-18 19:29:02.648171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:46.981 19:29:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:47.239 [2024-04-18 19:29:02.950124] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:47.239 BaseBdev1 00:35:47.240 19:29:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:47.240 19:29:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:35:47.240 19:29:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:47.240 19:29:02 -- common/autotest_common.sh@887 -- # local i 00:35:47.240 19:29:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:47.240 19:29:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:47.240 19:29:02 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:47.498 19:29:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:47.757 [ 00:35:47.757 { 00:35:47.757 "name": "BaseBdev1", 00:35:47.757 "aliases": [ 00:35:47.757 "babf72d5-73de-4f60-93f4-a63007793443" 00:35:47.757 ], 00:35:47.757 "product_name": "Malloc disk", 00:35:47.757 "block_size": 512, 00:35:47.757 "num_blocks": 65536, 00:35:47.757 "uuid": "babf72d5-73de-4f60-93f4-a63007793443", 00:35:47.757 "assigned_rate_limits": { 00:35:47.757 "rw_ios_per_sec": 0, 00:35:47.757 "rw_mbytes_per_sec": 0, 00:35:47.757 "r_mbytes_per_sec": 0, 00:35:47.757 "w_mbytes_per_sec": 0 00:35:47.757 }, 00:35:47.757 "claimed": true, 00:35:47.757 "claim_type": "exclusive_write", 00:35:47.757 "zoned": false, 00:35:47.757 "supported_io_types": { 00:35:47.757 "read": true, 00:35:47.757 "write": true, 00:35:47.757 "unmap": true, 00:35:47.757 "write_zeroes": true, 00:35:47.757 "flush": true, 00:35:47.757 "reset": true, 00:35:47.757 "compare": false, 00:35:47.757 "compare_and_write": false, 00:35:47.757 "abort": true, 00:35:47.757 "nvme_admin": false, 00:35:47.757 "nvme_io": false 00:35:47.757 }, 00:35:47.757 "memory_domains": [ 00:35:47.757 { 00:35:47.757 "dma_device_id": "system", 00:35:47.757 "dma_device_type": 1 00:35:47.757 }, 00:35:47.757 { 00:35:47.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.757 "dma_device_type": 2 00:35:47.757 } 00:35:47.757 ], 00:35:47.757 "driver_specific": {} 00:35:47.757 } 00:35:47.757 ] 00:35:47.757 19:29:03 -- common/autotest_common.sh@893 -- # return 0 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:47.757 19:29:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.016 19:29:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:48.016 "name": "Existed_Raid", 00:35:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.016 "strip_size_kb": 0, 00:35:48.016 "state": "configuring", 00:35:48.016 "raid_level": "raid1", 00:35:48.016 "superblock": false, 00:35:48.016 "num_base_bdevs": 4, 00:35:48.016 "num_base_bdevs_discovered": 1, 00:35:48.016 "num_base_bdevs_operational": 4, 00:35:48.016 "base_bdevs_list": [ 00:35:48.016 { 00:35:48.016 "name": "BaseBdev1", 00:35:48.016 "uuid": "babf72d5-73de-4f60-93f4-a63007793443", 00:35:48.016 "is_configured": true, 00:35:48.016 "data_offset": 0, 00:35:48.016 "data_size": 65536 00:35:48.016 }, 00:35:48.016 { 00:35:48.016 "name": "BaseBdev2", 00:35:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.016 
"is_configured": false, 00:35:48.016 "data_offset": 0, 00:35:48.016 "data_size": 0 00:35:48.016 }, 00:35:48.016 { 00:35:48.016 "name": "BaseBdev3", 00:35:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.016 "is_configured": false, 00:35:48.016 "data_offset": 0, 00:35:48.016 "data_size": 0 00:35:48.016 }, 00:35:48.016 { 00:35:48.016 "name": "BaseBdev4", 00:35:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.016 "is_configured": false, 00:35:48.016 "data_offset": 0, 00:35:48.016 "data_size": 0 00:35:48.017 } 00:35:48.017 ] 00:35:48.017 }' 00:35:48.017 19:29:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:48.017 19:29:03 -- common/autotest_common.sh@10 -- # set +x 00:35:48.583 19:29:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:48.841 [2024-04-18 19:29:04.730634] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:48.841 [2024-04-18 19:29:04.730692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:35:48.841 19:29:04 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:35:48.841 19:29:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:49.408 [2024-04-18 19:29:05.030738] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:49.408 [2024-04-18 19:29:05.032881] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:49.408 [2024-04-18 19:29:05.032963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:49.408 [2024-04-18 19:29:05.032974] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:49.408 [2024-04-18 19:29:05.033016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:49.408 [2024-04-18 19:29:05.033038] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:49.408 [2024-04-18 19:29:05.033055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.408 19:29:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:49.667 19:29:05 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:49.667 "name": "Existed_Raid", 00:35:49.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.667 "strip_size_kb": 0, 00:35:49.667 "state": "configuring", 00:35:49.667 "raid_level": "raid1", 00:35:49.667 "superblock": false, 00:35:49.667 "num_base_bdevs": 4, 00:35:49.667 "num_base_bdevs_discovered": 1, 00:35:49.667 "num_base_bdevs_operational": 4, 00:35:49.667 "base_bdevs_list": [ 00:35:49.667 { 00:35:49.667 "name": "BaseBdev1", 00:35:49.667 "uuid": "babf72d5-73de-4f60-93f4-a63007793443", 00:35:49.667 "is_configured": true, 00:35:49.667 "data_offset": 0, 00:35:49.667 "data_size": 65536 00:35:49.667 }, 00:35:49.667 { 00:35:49.667 "name": "BaseBdev2", 00:35:49.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.667 "is_configured": false, 00:35:49.667 "data_offset": 0, 00:35:49.667 "data_size": 0 00:35:49.667 }, 00:35:49.667 { 00:35:49.667 "name": "BaseBdev3", 00:35:49.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.667 "is_configured": false, 00:35:49.667 "data_offset": 0, 00:35:49.667 "data_size": 0 00:35:49.667 }, 00:35:49.667 { 00:35:49.667 "name": "BaseBdev4", 00:35:49.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.667 "is_configured": false, 00:35:49.667 "data_offset": 0, 00:35:49.667 "data_size": 0 00:35:49.667 } 00:35:49.667 ] 00:35:49.667 }' 00:35:49.667 19:29:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:49.667 19:29:05 -- common/autotest_common.sh@10 -- # set +x 00:35:50.234 19:29:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:50.493 [2024-04-18 19:29:06.367271] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:50.493 BaseBdev2 00:35:50.493 19:29:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:50.493 19:29:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:35:50.493 19:29:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:50.493 19:29:06 -- common/autotest_common.sh@887 -- # local i 00:35:50.493 19:29:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:50.493 19:29:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:50.493 19:29:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:50.751 19:29:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:51.318 [ 00:35:51.318 { 00:35:51.318 "name": "BaseBdev2", 00:35:51.318 "aliases": [ 00:35:51.318 "36a715b1-ac7a-4b70-b15b-2b21cd06bb29" 00:35:51.318 ], 00:35:51.318 "product_name": "Malloc disk", 00:35:51.318 "block_size": 512, 00:35:51.318 "num_blocks": 65536, 00:35:51.318 "uuid": "36a715b1-ac7a-4b70-b15b-2b21cd06bb29", 00:35:51.318 "assigned_rate_limits": { 00:35:51.318 "rw_ios_per_sec": 0, 00:35:51.318 "rw_mbytes_per_sec": 0, 00:35:51.318 "r_mbytes_per_sec": 0, 00:35:51.318 "w_mbytes_per_sec": 0 00:35:51.318 }, 00:35:51.318 "claimed": true, 00:35:51.318 "claim_type": "exclusive_write", 00:35:51.318 "zoned": false, 00:35:51.318 "supported_io_types": { 00:35:51.318 "read": true, 00:35:51.318 "write": true, 00:35:51.318 "unmap": true, 00:35:51.318 "write_zeroes": true, 00:35:51.318 "flush": true, 00:35:51.318 "reset": true, 00:35:51.318 "compare": false, 00:35:51.318 "compare_and_write": false, 00:35:51.318 "abort": true, 00:35:51.318 "nvme_admin": 
false, 00:35:51.318 "nvme_io": false 00:35:51.318 }, 00:35:51.318 "memory_domains": [ 00:35:51.318 { 00:35:51.318 "dma_device_id": "system", 00:35:51.318 "dma_device_type": 1 00:35:51.318 }, 00:35:51.318 { 00:35:51.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:51.318 "dma_device_type": 2 00:35:51.318 } 00:35:51.318 ], 00:35:51.318 "driver_specific": {} 00:35:51.318 } 00:35:51.318 ] 00:35:51.318 19:29:06 -- common/autotest_common.sh@893 -- # return 0 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.318 19:29:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:51.318 19:29:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:51.318 "name": "Existed_Raid", 00:35:51.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.318 "strip_size_kb": 0, 00:35:51.318 "state": "configuring", 00:35:51.318 "raid_level": "raid1", 00:35:51.318 "superblock": false, 00:35:51.318 "num_base_bdevs": 4, 00:35:51.318 "num_base_bdevs_discovered": 2, 00:35:51.318 "num_base_bdevs_operational": 4, 00:35:51.318 "base_bdevs_list": [ 00:35:51.318 { 00:35:51.318 "name": "BaseBdev1", 00:35:51.318 "uuid": "babf72d5-73de-4f60-93f4-a63007793443", 00:35:51.318 "is_configured": true, 00:35:51.318 "data_offset": 0, 00:35:51.318 "data_size": 65536 00:35:51.318 }, 00:35:51.318 { 00:35:51.318 "name": "BaseBdev2", 00:35:51.318 "uuid": "36a715b1-ac7a-4b70-b15b-2b21cd06bb29", 00:35:51.318 "is_configured": true, 00:35:51.318 "data_offset": 0, 00:35:51.318 "data_size": 65536 00:35:51.318 }, 00:35:51.318 { 00:35:51.318 "name": "BaseBdev3", 00:35:51.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.318 "is_configured": false, 00:35:51.318 "data_offset": 0, 00:35:51.318 "data_size": 0 00:35:51.318 }, 00:35:51.318 { 00:35:51.318 "name": "BaseBdev4", 00:35:51.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.318 "is_configured": false, 00:35:51.318 "data_offset": 0, 00:35:51.318 "data_size": 0 00:35:51.318 } 00:35:51.318 ] 00:35:51.318 }' 00:35:51.318 19:29:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:51.318 19:29:07 -- common/autotest_common.sh@10 -- # set +x 00:35:52.253 19:29:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:52.253 [2024-04-18 19:29:08.165009] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:52.253 BaseBdev3 00:35:52.253 19:29:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev 
BaseBdev3 00:35:52.253 19:29:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:35:52.253 19:29:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:52.253 19:29:08 -- common/autotest_common.sh@887 -- # local i 00:35:52.253 19:29:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:52.253 19:29:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:52.253 19:29:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:52.511 19:29:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:52.769 [ 00:35:52.769 { 00:35:52.769 "name": "BaseBdev3", 00:35:52.769 "aliases": [ 00:35:52.769 "00d6aa7d-35ed-46a8-b688-dc23923a5561" 00:35:52.769 ], 00:35:52.769 "product_name": "Malloc disk", 00:35:52.769 "block_size": 512, 00:35:52.769 "num_blocks": 65536, 00:35:52.769 "uuid": "00d6aa7d-35ed-46a8-b688-dc23923a5561", 00:35:52.769 "assigned_rate_limits": { 00:35:52.769 "rw_ios_per_sec": 0, 00:35:52.769 "rw_mbytes_per_sec": 0, 00:35:52.769 "r_mbytes_per_sec": 0, 00:35:52.769 "w_mbytes_per_sec": 0 00:35:52.769 }, 00:35:52.769 "claimed": true, 00:35:52.769 "claim_type": "exclusive_write", 00:35:52.769 "zoned": false, 00:35:52.769 "supported_io_types": { 00:35:52.769 "read": true, 00:35:52.769 "write": true, 00:35:52.769 "unmap": true, 00:35:52.769 "write_zeroes": true, 00:35:52.769 "flush": true, 00:35:52.769 "reset": true, 00:35:52.769 "compare": false, 00:35:52.769 "compare_and_write": false, 00:35:52.769 "abort": true, 00:35:52.769 "nvme_admin": false, 00:35:52.769 "nvme_io": false 00:35:52.769 }, 00:35:52.769 "memory_domains": [ 00:35:52.769 { 00:35:52.769 "dma_device_id": "system", 00:35:52.769 "dma_device_type": 1 00:35:52.769 }, 00:35:52.769 { 00:35:52.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:52.769 "dma_device_type": 2 00:35:52.769 } 00:35:52.769 ], 00:35:52.769 "driver_specific": {} 00:35:52.769 } 00:35:52.769 ] 00:35:52.769 19:29:08 -- common/autotest_common.sh@893 -- # return 0 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.770 19:29:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:53.028 19:29:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:53.028 "name": "Existed_Raid", 00:35:53.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.028 "strip_size_kb": 0, 00:35:53.028 
"state": "configuring", 00:35:53.028 "raid_level": "raid1", 00:35:53.028 "superblock": false, 00:35:53.028 "num_base_bdevs": 4, 00:35:53.028 "num_base_bdevs_discovered": 3, 00:35:53.028 "num_base_bdevs_operational": 4, 00:35:53.028 "base_bdevs_list": [ 00:35:53.028 { 00:35:53.028 "name": "BaseBdev1", 00:35:53.028 "uuid": "babf72d5-73de-4f60-93f4-a63007793443", 00:35:53.028 "is_configured": true, 00:35:53.028 "data_offset": 0, 00:35:53.028 "data_size": 65536 00:35:53.028 }, 00:35:53.028 { 00:35:53.028 "name": "BaseBdev2", 00:35:53.028 "uuid": "36a715b1-ac7a-4b70-b15b-2b21cd06bb29", 00:35:53.028 "is_configured": true, 00:35:53.028 "data_offset": 0, 00:35:53.028 "data_size": 65536 00:35:53.028 }, 00:35:53.028 { 00:35:53.028 "name": "BaseBdev3", 00:35:53.028 "uuid": "00d6aa7d-35ed-46a8-b688-dc23923a5561", 00:35:53.028 "is_configured": true, 00:35:53.028 "data_offset": 0, 00:35:53.028 "data_size": 65536 00:35:53.028 }, 00:35:53.028 { 00:35:53.028 "name": "BaseBdev4", 00:35:53.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.028 "is_configured": false, 00:35:53.028 "data_offset": 0, 00:35:53.028 "data_size": 0 00:35:53.028 } 00:35:53.028 ] 00:35:53.028 }' 00:35:53.028 19:29:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:53.028 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:35:53.966 19:29:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:35:53.966 [2024-04-18 19:29:09.885091] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:53.966 [2024-04-18 19:29:09.885154] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:35:53.966 [2024-04-18 19:29:09.885165] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:35:53.966 [2024-04-18 19:29:09.885336] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:35:53.966 [2024-04-18 19:29:09.885693] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:35:53.966 [2024-04-18 19:29:09.885731] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:35:53.966 [2024-04-18 19:29:09.885986] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:53.966 BaseBdev4 00:35:54.225 19:29:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:35:54.225 19:29:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:35:54.225 19:29:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:35:54.225 19:29:09 -- common/autotest_common.sh@887 -- # local i 00:35:54.225 19:29:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:35:54.225 19:29:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:35:54.225 19:29:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:54.484 19:29:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:54.743 [ 00:35:54.743 { 00:35:54.743 "name": "BaseBdev4", 00:35:54.743 "aliases": [ 00:35:54.743 "1780928c-c4ca-4c22-86a4-98922d754017" 00:35:54.743 ], 00:35:54.743 "product_name": "Malloc disk", 00:35:54.743 "block_size": 512, 00:35:54.743 "num_blocks": 65536, 00:35:54.743 "uuid": "1780928c-c4ca-4c22-86a4-98922d754017", 00:35:54.743 "assigned_rate_limits": { 
00:35:54.743 "rw_ios_per_sec": 0, 00:35:54.743 "rw_mbytes_per_sec": 0, 00:35:54.743 "r_mbytes_per_sec": 0, 00:35:54.743 "w_mbytes_per_sec": 0 00:35:54.743 }, 00:35:54.743 "claimed": true, 00:35:54.743 "claim_type": "exclusive_write", 00:35:54.743 "zoned": false, 00:35:54.743 "supported_io_types": { 00:35:54.743 "read": true, 00:35:54.743 "write": true, 00:35:54.743 "unmap": true, 00:35:54.743 "write_zeroes": true, 00:35:54.743 "flush": true, 00:35:54.743 "reset": true, 00:35:54.743 "compare": false, 00:35:54.743 "compare_and_write": false, 00:35:54.743 "abort": true, 00:35:54.743 "nvme_admin": false, 00:35:54.743 "nvme_io": false 00:35:54.743 }, 00:35:54.743 "memory_domains": [ 00:35:54.743 { 00:35:54.743 "dma_device_id": "system", 00:35:54.743 "dma_device_type": 1 00:35:54.743 }, 00:35:54.743 { 00:35:54.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:54.743 "dma_device_type": 2 00:35:54.743 } 00:35:54.743 ], 00:35:54.743 "driver_specific": {} 00:35:54.743 } 00:35:54.743 ] 00:35:54.743 19:29:10 -- common/autotest_common.sh@893 -- # return 0 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:54.743 19:29:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.002 19:29:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:55.002 "name": "Existed_Raid", 00:35:55.002 "uuid": "8863aff7-a98e-4910-8b41-e44e392439b3", 00:35:55.002 "strip_size_kb": 0, 00:35:55.002 "state": "online", 00:35:55.002 "raid_level": "raid1", 00:35:55.002 "superblock": false, 00:35:55.002 "num_base_bdevs": 4, 00:35:55.002 "num_base_bdevs_discovered": 4, 00:35:55.002 "num_base_bdevs_operational": 4, 00:35:55.002 "base_bdevs_list": [ 00:35:55.002 { 00:35:55.002 "name": "BaseBdev1", 00:35:55.002 "uuid": "babf72d5-73de-4f60-93f4-a63007793443", 00:35:55.002 "is_configured": true, 00:35:55.002 "data_offset": 0, 00:35:55.002 "data_size": 65536 00:35:55.002 }, 00:35:55.002 { 00:35:55.002 "name": "BaseBdev2", 00:35:55.002 "uuid": "36a715b1-ac7a-4b70-b15b-2b21cd06bb29", 00:35:55.002 "is_configured": true, 00:35:55.002 "data_offset": 0, 00:35:55.002 "data_size": 65536 00:35:55.002 }, 00:35:55.002 { 00:35:55.002 "name": "BaseBdev3", 00:35:55.002 "uuid": "00d6aa7d-35ed-46a8-b688-dc23923a5561", 00:35:55.002 "is_configured": true, 00:35:55.002 "data_offset": 0, 00:35:55.002 "data_size": 65536 00:35:55.002 }, 00:35:55.002 { 00:35:55.002 "name": "BaseBdev4", 00:35:55.002 "uuid": "1780928c-c4ca-4c22-86a4-98922d754017", 00:35:55.002 "is_configured": true, 00:35:55.002 "data_offset": 0, 
00:35:55.002 "data_size": 65536 00:35:55.002 } 00:35:55.002 ] 00:35:55.002 }' 00:35:55.002 19:29:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:55.002 19:29:10 -- common/autotest_common.sh@10 -- # set +x 00:35:55.570 19:29:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:55.828 [2024-04-18 19:29:11.734227] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.087 19:29:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:56.345 19:29:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:56.345 "name": "Existed_Raid", 00:35:56.345 "uuid": "8863aff7-a98e-4910-8b41-e44e392439b3", 00:35:56.345 "strip_size_kb": 0, 00:35:56.345 "state": "online", 00:35:56.345 "raid_level": "raid1", 00:35:56.345 "superblock": false, 00:35:56.345 "num_base_bdevs": 4, 00:35:56.345 "num_base_bdevs_discovered": 3, 00:35:56.345 "num_base_bdevs_operational": 3, 00:35:56.345 "base_bdevs_list": [ 00:35:56.345 { 00:35:56.345 "name": null, 00:35:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:56.345 "is_configured": false, 00:35:56.345 "data_offset": 0, 00:35:56.345 "data_size": 65536 00:35:56.345 }, 00:35:56.345 { 00:35:56.345 "name": "BaseBdev2", 00:35:56.345 "uuid": "36a715b1-ac7a-4b70-b15b-2b21cd06bb29", 00:35:56.345 "is_configured": true, 00:35:56.345 "data_offset": 0, 00:35:56.345 "data_size": 65536 00:35:56.345 }, 00:35:56.345 { 00:35:56.345 "name": "BaseBdev3", 00:35:56.345 "uuid": "00d6aa7d-35ed-46a8-b688-dc23923a5561", 00:35:56.345 "is_configured": true, 00:35:56.345 "data_offset": 0, 00:35:56.345 "data_size": 65536 00:35:56.345 }, 00:35:56.345 { 00:35:56.345 "name": "BaseBdev4", 00:35:56.345 "uuid": "1780928c-c4ca-4c22-86a4-98922d754017", 00:35:56.345 "is_configured": true, 00:35:56.345 "data_offset": 0, 00:35:56.345 "data_size": 65536 00:35:56.345 } 00:35:56.345 ] 00:35:56.345 }' 00:35:56.345 19:29:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:56.345 19:29:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.279 19:29:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:57.279 19:29:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:57.279 19:29:12 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.279 19:29:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:57.279 19:29:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:57.279 19:29:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:57.279 19:29:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:57.538 [2024-04-18 19:29:13.393696] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:57.796 19:29:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:57.796 19:29:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:57.796 19:29:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:57.796 19:29:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.054 19:29:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:58.054 19:29:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:58.054 19:29:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:35:58.313 [2024-04-18 19:29:14.085954] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:58.313 19:29:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:58.313 19:29:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:58.313 19:29:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.313 19:29:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:58.880 19:29:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:58.880 19:29:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:58.880 19:29:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:35:58.880 [2024-04-18 19:29:14.759834] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:58.880 [2024-04-18 19:29:14.759942] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:59.139 [2024-04-18 19:29:14.869764] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:59.139 [2024-04-18 19:29:14.869901] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:59.139 [2024-04-18 19:29:14.869913] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:35:59.139 19:29:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:59.139 19:29:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:59.139 19:29:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:59.139 19:29:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:59.397 19:29:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:59.397 19:29:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:59.397 19:29:15 -- bdev/bdev_raid.sh@287 -- # killprocess 130833 00:35:59.397 19:29:15 -- common/autotest_common.sh@936 -- # '[' -z 130833 ']' 00:35:59.397 19:29:15 -- common/autotest_common.sh@940 -- # kill -0 130833 00:35:59.397 19:29:15 -- common/autotest_common.sh@941 -- # uname 00:35:59.397 
19:29:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:59.397 19:29:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130833 00:35:59.397 killing process with pid 130833 00:35:59.397 19:29:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:59.397 19:29:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:59.397 19:29:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130833' 00:35:59.397 19:29:15 -- common/autotest_common.sh@955 -- # kill 130833 00:35:59.397 19:29:15 -- common/autotest_common.sh@960 -- # wait 130833 00:35:59.397 [2024-04-18 19:29:15.243295] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:59.397 [2024-04-18 19:29:15.243430] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:00.771 ************************************ 00:36:00.771 END TEST raid_state_function_test 00:36:00.772 ************************************ 00:36:00.772 19:29:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:36:00.772 00:36:00.772 real 0m16.774s 00:36:00.772 user 0m29.721s 00:36:00.772 sys 0m2.023s 00:36:00.772 19:29:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:00.772 19:29:16 -- common/autotest_common.sh@10 -- # set +x 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:36:01.031 19:29:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:36:01.031 19:29:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:01.031 19:29:16 -- common/autotest_common.sh@10 -- # set +x 00:36:01.031 ************************************ 00:36:01.031 START TEST raid_state_function_test_sb 00:36:01.031 ************************************ 00:36:01.031 19:29:16 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 true 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@209 -- # local 
strip_size_create_arg 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=131315 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131315' 00:36:01.031 Process raid pid: 131315 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:01.031 19:29:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131315 /var/tmp/spdk-raid.sock 00:36:01.031 19:29:16 -- common/autotest_common.sh@817 -- # '[' -z 131315 ']' 00:36:01.031 19:29:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:01.031 19:29:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:01.031 19:29:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:01.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:01.031 19:29:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:01.031 19:29:16 -- common/autotest_common.sh@10 -- # set +x 00:36:01.031 [2024-04-18 19:29:16.855470] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:36:01.031 [2024-04-18 19:29:16.855859] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.290 [2024-04-18 19:29:17.039581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.549 [2024-04-18 19:29:17.320001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.808 [2024-04-18 19:29:17.555933] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:02.066 19:29:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:02.066 19:29:17 -- common/autotest_common.sh@850 -- # return 0 00:36:02.066 19:29:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:02.325 [2024-04-18 19:29:18.154103] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:02.325 [2024-04-18 19:29:18.154353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:02.325 [2024-04-18 19:29:18.154441] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:02.325 [2024-04-18 19:29:18.154493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:02.325 [2024-04-18 19:29:18.154519] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:02.325 [2024-04-18 19:29:18.154577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:02.325 [2024-04-18 19:29:18.154659] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:02.325 [2024-04-18 19:29:18.154710] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.325 19:29:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:02.584 19:29:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:02.584 "name": "Existed_Raid", 00:36:02.584 "uuid": "6e2755ab-b085-456b-8132-1a5da27c8d05", 00:36:02.584 "strip_size_kb": 0, 00:36:02.584 "state": "configuring", 00:36:02.585 "raid_level": "raid1", 00:36:02.585 "superblock": true, 00:36:02.585 "num_base_bdevs": 4, 00:36:02.585 "num_base_bdevs_discovered": 0, 00:36:02.585 "num_base_bdevs_operational": 4, 00:36:02.585 "base_bdevs_list": [ 00:36:02.585 { 00:36:02.585 "name": "BaseBdev1", 00:36:02.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.585 "is_configured": false, 00:36:02.585 "data_offset": 0, 00:36:02.585 "data_size": 0 00:36:02.585 }, 00:36:02.585 { 00:36:02.585 "name": "BaseBdev2", 00:36:02.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.585 "is_configured": false, 00:36:02.585 "data_offset": 0, 00:36:02.585 "data_size": 0 00:36:02.585 }, 00:36:02.585 { 00:36:02.585 "name": "BaseBdev3", 00:36:02.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.585 "is_configured": false, 00:36:02.585 "data_offset": 0, 00:36:02.585 "data_size": 0 00:36:02.585 }, 00:36:02.585 { 00:36:02.585 "name": "BaseBdev4", 00:36:02.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.585 "is_configured": false, 00:36:02.585 "data_offset": 0, 00:36:02.585 "data_size": 0 00:36:02.585 } 00:36:02.585 ] 00:36:02.585 }' 00:36:02.585 19:29:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:02.585 19:29:18 -- common/autotest_common.sh@10 -- # set +x 00:36:03.520 19:29:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:03.520 [2024-04-18 19:29:19.394214] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:03.520 [2024-04-18 19:29:19.394428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:36:03.520 19:29:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:03.802 [2024-04-18 19:29:19.674315] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:03.802 [2024-04-18 19:29:19.674574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:36:03.802 [2024-04-18 19:29:19.674670] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:03.802 [2024-04-18 19:29:19.674726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:03.802 [2024-04-18 19:29:19.674795] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:03.802 [2024-04-18 19:29:19.674949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:03.802 [2024-04-18 19:29:19.675024] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:03.802 [2024-04-18 19:29:19.675076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:03.802 19:29:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:04.062 [2024-04-18 19:29:19.924425] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:04.062 BaseBdev1 00:36:04.062 19:29:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:36:04.062 19:29:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:36:04.062 19:29:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:04.062 19:29:19 -- common/autotest_common.sh@887 -- # local i 00:36:04.062 19:29:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:04.062 19:29:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:04.062 19:29:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:04.320 19:29:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:04.581 [ 00:36:04.581 { 00:36:04.581 "name": "BaseBdev1", 00:36:04.581 "aliases": [ 00:36:04.581 "58210bbe-47a7-494f-b920-5dcdfdd256ed" 00:36:04.581 ], 00:36:04.581 "product_name": "Malloc disk", 00:36:04.581 "block_size": 512, 00:36:04.581 "num_blocks": 65536, 00:36:04.581 "uuid": "58210bbe-47a7-494f-b920-5dcdfdd256ed", 00:36:04.581 "assigned_rate_limits": { 00:36:04.581 "rw_ios_per_sec": 0, 00:36:04.581 "rw_mbytes_per_sec": 0, 00:36:04.581 "r_mbytes_per_sec": 0, 00:36:04.581 "w_mbytes_per_sec": 0 00:36:04.581 }, 00:36:04.581 "claimed": true, 00:36:04.581 "claim_type": "exclusive_write", 00:36:04.581 "zoned": false, 00:36:04.581 "supported_io_types": { 00:36:04.581 "read": true, 00:36:04.581 "write": true, 00:36:04.581 "unmap": true, 00:36:04.581 "write_zeroes": true, 00:36:04.581 "flush": true, 00:36:04.581 "reset": true, 00:36:04.581 "compare": false, 00:36:04.581 "compare_and_write": false, 00:36:04.581 "abort": true, 00:36:04.581 "nvme_admin": false, 00:36:04.581 "nvme_io": false 00:36:04.581 }, 00:36:04.581 "memory_domains": [ 00:36:04.581 { 00:36:04.581 "dma_device_id": "system", 00:36:04.581 "dma_device_type": 1 00:36:04.581 }, 00:36:04.581 { 00:36:04.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:04.581 "dma_device_type": 2 00:36:04.581 } 00:36:04.581 ], 00:36:04.581 "driver_specific": {} 00:36:04.581 } 00:36:04.581 ] 00:36:04.581 19:29:20 -- common/autotest_common.sh@893 -- # return 0 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.581 19:29:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:04.841 19:29:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:04.841 "name": "Existed_Raid", 00:36:04.841 "uuid": "400b8213-8273-4747-8fd8-946b4467d166", 00:36:04.841 "strip_size_kb": 0, 00:36:04.841 "state": "configuring", 00:36:04.841 "raid_level": "raid1", 00:36:04.841 "superblock": true, 00:36:04.841 "num_base_bdevs": 4, 00:36:04.841 "num_base_bdevs_discovered": 1, 00:36:04.841 "num_base_bdevs_operational": 4, 00:36:04.841 "base_bdevs_list": [ 00:36:04.841 { 00:36:04.841 "name": "BaseBdev1", 00:36:04.841 "uuid": "58210bbe-47a7-494f-b920-5dcdfdd256ed", 00:36:04.841 "is_configured": true, 00:36:04.841 "data_offset": 2048, 00:36:04.841 "data_size": 63488 00:36:04.841 }, 00:36:04.841 { 00:36:04.841 "name": "BaseBdev2", 00:36:04.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.841 "is_configured": false, 00:36:04.841 "data_offset": 0, 00:36:04.841 "data_size": 0 00:36:04.841 }, 00:36:04.841 { 00:36:04.841 "name": "BaseBdev3", 00:36:04.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.841 "is_configured": false, 00:36:04.841 "data_offset": 0, 00:36:04.841 "data_size": 0 00:36:04.841 }, 00:36:04.841 { 00:36:04.841 "name": "BaseBdev4", 00:36:04.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.841 "is_configured": false, 00:36:04.841 "data_offset": 0, 00:36:04.841 "data_size": 0 00:36:04.841 } 00:36:04.841 ] 00:36:04.841 }' 00:36:04.841 19:29:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:04.841 19:29:20 -- common/autotest_common.sh@10 -- # set +x 00:36:05.775 19:29:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:05.775 [2024-04-18 19:29:21.672902] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:05.775 [2024-04-18 19:29:21.673111] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:36:05.775 19:29:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:36:05.775 19:29:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:06.413 19:29:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:06.672 BaseBdev1 00:36:06.672 19:29:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:36:06.672 19:29:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:36:06.672 19:29:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:06.672 19:29:22 -- common/autotest_common.sh@887 -- # local i 00:36:06.672 19:29:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
00:36:06.672 19:29:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:06.672 19:29:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:06.930 19:29:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:07.188 [ 00:36:07.188 { 00:36:07.188 "name": "BaseBdev1", 00:36:07.188 "aliases": [ 00:36:07.188 "a706e594-f7eb-4c54-b1ea-80863756d337" 00:36:07.188 ], 00:36:07.188 "product_name": "Malloc disk", 00:36:07.188 "block_size": 512, 00:36:07.188 "num_blocks": 65536, 00:36:07.188 "uuid": "a706e594-f7eb-4c54-b1ea-80863756d337", 00:36:07.188 "assigned_rate_limits": { 00:36:07.188 "rw_ios_per_sec": 0, 00:36:07.188 "rw_mbytes_per_sec": 0, 00:36:07.188 "r_mbytes_per_sec": 0, 00:36:07.188 "w_mbytes_per_sec": 0 00:36:07.188 }, 00:36:07.188 "claimed": false, 00:36:07.188 "zoned": false, 00:36:07.188 "supported_io_types": { 00:36:07.188 "read": true, 00:36:07.188 "write": true, 00:36:07.188 "unmap": true, 00:36:07.188 "write_zeroes": true, 00:36:07.188 "flush": true, 00:36:07.188 "reset": true, 00:36:07.188 "compare": false, 00:36:07.188 "compare_and_write": false, 00:36:07.188 "abort": true, 00:36:07.188 "nvme_admin": false, 00:36:07.188 "nvme_io": false 00:36:07.188 }, 00:36:07.188 "memory_domains": [ 00:36:07.188 { 00:36:07.188 "dma_device_id": "system", 00:36:07.188 "dma_device_type": 1 00:36:07.188 }, 00:36:07.188 { 00:36:07.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:07.188 "dma_device_type": 2 00:36:07.188 } 00:36:07.188 ], 00:36:07.188 "driver_specific": {} 00:36:07.188 } 00:36:07.188 ] 00:36:07.188 19:29:23 -- common/autotest_common.sh@893 -- # return 0 00:36:07.188 19:29:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:07.446 [2024-04-18 19:29:23.258232] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:07.446 [2024-04-18 19:29:23.260651] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:07.446 [2024-04-18 19:29:23.260851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:07.446 [2024-04-18 19:29:23.260937] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:07.446 [2024-04-18 19:29:23.260994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:07.446 [2024-04-18 19:29:23.261141] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:07.446 [2024-04-18 19:29:23.261190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:07.446 19:29:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:36:07.446 19:29:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:07.446 19:29:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:07.446 19:29:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:07.447 19:29:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:07.703 19:29:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:07.703 "name": "Existed_Raid", 00:36:07.703 "uuid": "02abb6dd-760f-414c-a48c-ec99576d39a1", 00:36:07.703 "strip_size_kb": 0, 00:36:07.703 "state": "configuring", 00:36:07.703 "raid_level": "raid1", 00:36:07.703 "superblock": true, 00:36:07.703 "num_base_bdevs": 4, 00:36:07.703 "num_base_bdevs_discovered": 1, 00:36:07.703 "num_base_bdevs_operational": 4, 00:36:07.703 "base_bdevs_list": [ 00:36:07.703 { 00:36:07.703 "name": "BaseBdev1", 00:36:07.703 "uuid": "a706e594-f7eb-4c54-b1ea-80863756d337", 00:36:07.703 "is_configured": true, 00:36:07.703 "data_offset": 2048, 00:36:07.703 "data_size": 63488 00:36:07.703 }, 00:36:07.703 { 00:36:07.703 "name": "BaseBdev2", 00:36:07.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.703 "is_configured": false, 00:36:07.703 "data_offset": 0, 00:36:07.703 "data_size": 0 00:36:07.703 }, 00:36:07.703 { 00:36:07.703 "name": "BaseBdev3", 00:36:07.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.703 "is_configured": false, 00:36:07.703 "data_offset": 0, 00:36:07.703 "data_size": 0 00:36:07.703 }, 00:36:07.703 { 00:36:07.703 "name": "BaseBdev4", 00:36:07.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.703 "is_configured": false, 00:36:07.703 "data_offset": 0, 00:36:07.703 "data_size": 0 00:36:07.703 } 00:36:07.703 ] 00:36:07.703 }' 00:36:07.703 19:29:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:07.703 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:36:08.636 19:29:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:08.636 [2024-04-18 19:29:24.560356] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:08.636 BaseBdev2 00:36:08.892 19:29:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:36:08.893 19:29:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:36:08.893 19:29:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:08.893 19:29:24 -- common/autotest_common.sh@887 -- # local i 00:36:08.893 19:29:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:08.893 19:29:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:08.893 19:29:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:08.893 19:29:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:09.150 [ 00:36:09.150 { 00:36:09.150 "name": "BaseBdev2", 00:36:09.150 "aliases": [ 00:36:09.150 "92ba1779-10ba-49a2-938a-04e79bee8865" 00:36:09.150 ], 00:36:09.150 "product_name": "Malloc disk", 00:36:09.150 "block_size": 512, 00:36:09.150 "num_blocks": 65536, 00:36:09.150 "uuid": "92ba1779-10ba-49a2-938a-04e79bee8865", 00:36:09.150 "assigned_rate_limits": { 
00:36:09.150 "rw_ios_per_sec": 0, 00:36:09.150 "rw_mbytes_per_sec": 0, 00:36:09.150 "r_mbytes_per_sec": 0, 00:36:09.150 "w_mbytes_per_sec": 0 00:36:09.150 }, 00:36:09.150 "claimed": true, 00:36:09.150 "claim_type": "exclusive_write", 00:36:09.150 "zoned": false, 00:36:09.150 "supported_io_types": { 00:36:09.150 "read": true, 00:36:09.150 "write": true, 00:36:09.150 "unmap": true, 00:36:09.150 "write_zeroes": true, 00:36:09.150 "flush": true, 00:36:09.150 "reset": true, 00:36:09.150 "compare": false, 00:36:09.150 "compare_and_write": false, 00:36:09.150 "abort": true, 00:36:09.150 "nvme_admin": false, 00:36:09.150 "nvme_io": false 00:36:09.150 }, 00:36:09.150 "memory_domains": [ 00:36:09.150 { 00:36:09.150 "dma_device_id": "system", 00:36:09.151 "dma_device_type": 1 00:36:09.151 }, 00:36:09.151 { 00:36:09.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:09.151 "dma_device_type": 2 00:36:09.151 } 00:36:09.151 ], 00:36:09.151 "driver_specific": {} 00:36:09.151 } 00:36:09.151 ] 00:36:09.151 19:29:25 -- common/autotest_common.sh@893 -- # return 0 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.151 19:29:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:09.716 19:29:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:09.716 "name": "Existed_Raid", 00:36:09.716 "uuid": "02abb6dd-760f-414c-a48c-ec99576d39a1", 00:36:09.716 "strip_size_kb": 0, 00:36:09.716 "state": "configuring", 00:36:09.716 "raid_level": "raid1", 00:36:09.716 "superblock": true, 00:36:09.716 "num_base_bdevs": 4, 00:36:09.716 "num_base_bdevs_discovered": 2, 00:36:09.716 "num_base_bdevs_operational": 4, 00:36:09.716 "base_bdevs_list": [ 00:36:09.716 { 00:36:09.716 "name": "BaseBdev1", 00:36:09.716 "uuid": "a706e594-f7eb-4c54-b1ea-80863756d337", 00:36:09.716 "is_configured": true, 00:36:09.716 "data_offset": 2048, 00:36:09.716 "data_size": 63488 00:36:09.716 }, 00:36:09.716 { 00:36:09.716 "name": "BaseBdev2", 00:36:09.716 "uuid": "92ba1779-10ba-49a2-938a-04e79bee8865", 00:36:09.716 "is_configured": true, 00:36:09.716 "data_offset": 2048, 00:36:09.716 "data_size": 63488 00:36:09.716 }, 00:36:09.716 { 00:36:09.716 "name": "BaseBdev3", 00:36:09.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.716 "is_configured": false, 00:36:09.716 "data_offset": 0, 00:36:09.716 "data_size": 0 00:36:09.716 }, 00:36:09.716 { 00:36:09.716 "name": "BaseBdev4", 00:36:09.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.716 "is_configured": false, 00:36:09.716 
"data_offset": 0, 00:36:09.716 "data_size": 0 00:36:09.716 } 00:36:09.716 ] 00:36:09.716 }' 00:36:09.716 19:29:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:09.716 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:36:10.280 19:29:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:10.538 [2024-04-18 19:29:26.317364] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:10.538 BaseBdev3 00:36:10.538 19:29:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:36:10.538 19:29:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:36:10.538 19:29:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:10.538 19:29:26 -- common/autotest_common.sh@887 -- # local i 00:36:10.538 19:29:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:10.538 19:29:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:10.538 19:29:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:10.796 19:29:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:11.055 [ 00:36:11.055 { 00:36:11.055 "name": "BaseBdev3", 00:36:11.055 "aliases": [ 00:36:11.055 "e1c9e2be-9f48-414f-b96e-681871064a7b" 00:36:11.055 ], 00:36:11.055 "product_name": "Malloc disk", 00:36:11.055 "block_size": 512, 00:36:11.055 "num_blocks": 65536, 00:36:11.055 "uuid": "e1c9e2be-9f48-414f-b96e-681871064a7b", 00:36:11.055 "assigned_rate_limits": { 00:36:11.055 "rw_ios_per_sec": 0, 00:36:11.055 "rw_mbytes_per_sec": 0, 00:36:11.055 "r_mbytes_per_sec": 0, 00:36:11.055 "w_mbytes_per_sec": 0 00:36:11.055 }, 00:36:11.055 "claimed": true, 00:36:11.055 "claim_type": "exclusive_write", 00:36:11.055 "zoned": false, 00:36:11.055 "supported_io_types": { 00:36:11.055 "read": true, 00:36:11.055 "write": true, 00:36:11.055 "unmap": true, 00:36:11.055 "write_zeroes": true, 00:36:11.055 "flush": true, 00:36:11.055 "reset": true, 00:36:11.055 "compare": false, 00:36:11.055 "compare_and_write": false, 00:36:11.055 "abort": true, 00:36:11.055 "nvme_admin": false, 00:36:11.055 "nvme_io": false 00:36:11.055 }, 00:36:11.055 "memory_domains": [ 00:36:11.055 { 00:36:11.055 "dma_device_id": "system", 00:36:11.055 "dma_device_type": 1 00:36:11.055 }, 00:36:11.055 { 00:36:11.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:11.055 "dma_device_type": 2 00:36:11.055 } 00:36:11.055 ], 00:36:11.055 "driver_specific": {} 00:36:11.055 } 00:36:11.055 ] 00:36:11.055 19:29:26 -- common/autotest_common.sh@893 -- # return 0 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:11.055 
19:29:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.055 19:29:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:11.313 19:29:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:11.313 "name": "Existed_Raid", 00:36:11.313 "uuid": "02abb6dd-760f-414c-a48c-ec99576d39a1", 00:36:11.314 "strip_size_kb": 0, 00:36:11.314 "state": "configuring", 00:36:11.314 "raid_level": "raid1", 00:36:11.314 "superblock": true, 00:36:11.314 "num_base_bdevs": 4, 00:36:11.314 "num_base_bdevs_discovered": 3, 00:36:11.314 "num_base_bdevs_operational": 4, 00:36:11.314 "base_bdevs_list": [ 00:36:11.314 { 00:36:11.314 "name": "BaseBdev1", 00:36:11.314 "uuid": "a706e594-f7eb-4c54-b1ea-80863756d337", 00:36:11.314 "is_configured": true, 00:36:11.314 "data_offset": 2048, 00:36:11.314 "data_size": 63488 00:36:11.314 }, 00:36:11.314 { 00:36:11.314 "name": "BaseBdev2", 00:36:11.314 "uuid": "92ba1779-10ba-49a2-938a-04e79bee8865", 00:36:11.314 "is_configured": true, 00:36:11.314 "data_offset": 2048, 00:36:11.314 "data_size": 63488 00:36:11.314 }, 00:36:11.314 { 00:36:11.314 "name": "BaseBdev3", 00:36:11.314 "uuid": "e1c9e2be-9f48-414f-b96e-681871064a7b", 00:36:11.314 "is_configured": true, 00:36:11.314 "data_offset": 2048, 00:36:11.314 "data_size": 63488 00:36:11.314 }, 00:36:11.314 { 00:36:11.314 "name": "BaseBdev4", 00:36:11.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.314 "is_configured": false, 00:36:11.314 "data_offset": 0, 00:36:11.314 "data_size": 0 00:36:11.314 } 00:36:11.314 ] 00:36:11.314 }' 00:36:11.314 19:29:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:11.314 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:36:12.249 19:29:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:12.249 [2024-04-18 19:29:28.153974] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:12.249 [2024-04-18 19:29:28.154399] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:36:12.249 [2024-04-18 19:29:28.154515] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:12.249 [2024-04-18 19:29:28.154722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:36:12.249 [2024-04-18 19:29:28.155157] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:36:12.249 [2024-04-18 19:29:28.155268] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:36:12.249 [2024-04-18 19:29:28.155535] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:12.249 BaseBdev4 00:36:12.249 19:29:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:36:12.249 19:29:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:36:12.249 19:29:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:36:12.249 19:29:28 -- common/autotest_common.sh@887 -- # local i 00:36:12.249 19:29:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:36:12.249 19:29:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:36:12.250 19:29:28 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:12.817 19:29:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:13.075 [ 00:36:13.075 { 00:36:13.075 "name": "BaseBdev4", 00:36:13.075 "aliases": [ 00:36:13.075 "efdaa1b7-9366-4d6e-b51d-6fa5937527af" 00:36:13.075 ], 00:36:13.075 "product_name": "Malloc disk", 00:36:13.075 "block_size": 512, 00:36:13.075 "num_blocks": 65536, 00:36:13.075 "uuid": "efdaa1b7-9366-4d6e-b51d-6fa5937527af", 00:36:13.075 "assigned_rate_limits": { 00:36:13.075 "rw_ios_per_sec": 0, 00:36:13.075 "rw_mbytes_per_sec": 0, 00:36:13.075 "r_mbytes_per_sec": 0, 00:36:13.075 "w_mbytes_per_sec": 0 00:36:13.075 }, 00:36:13.075 "claimed": true, 00:36:13.075 "claim_type": "exclusive_write", 00:36:13.075 "zoned": false, 00:36:13.075 "supported_io_types": { 00:36:13.075 "read": true, 00:36:13.075 "write": true, 00:36:13.075 "unmap": true, 00:36:13.075 "write_zeroes": true, 00:36:13.075 "flush": true, 00:36:13.075 "reset": true, 00:36:13.075 "compare": false, 00:36:13.075 "compare_and_write": false, 00:36:13.075 "abort": true, 00:36:13.075 "nvme_admin": false, 00:36:13.075 "nvme_io": false 00:36:13.075 }, 00:36:13.075 "memory_domains": [ 00:36:13.075 { 00:36:13.075 "dma_device_id": "system", 00:36:13.075 "dma_device_type": 1 00:36:13.075 }, 00:36:13.075 { 00:36:13.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:13.075 "dma_device_type": 2 00:36:13.075 } 00:36:13.075 ], 00:36:13.075 "driver_specific": {} 00:36:13.075 } 00:36:13.075 ] 00:36:13.075 19:29:28 -- common/autotest_common.sh@893 -- # return 0 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.075 19:29:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:13.334 19:29:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:13.334 "name": "Existed_Raid", 00:36:13.334 "uuid": "02abb6dd-760f-414c-a48c-ec99576d39a1", 00:36:13.334 "strip_size_kb": 0, 00:36:13.334 "state": "online", 00:36:13.334 "raid_level": "raid1", 00:36:13.334 "superblock": true, 00:36:13.334 "num_base_bdevs": 4, 00:36:13.334 "num_base_bdevs_discovered": 4, 00:36:13.334 "num_base_bdevs_operational": 4, 00:36:13.334 "base_bdevs_list": [ 00:36:13.334 { 00:36:13.334 "name": "BaseBdev1", 00:36:13.334 "uuid": "a706e594-f7eb-4c54-b1ea-80863756d337", 00:36:13.334 "is_configured": true, 00:36:13.334 "data_offset": 2048, 00:36:13.334 "data_size": 63488 00:36:13.334 
}, 00:36:13.334 { 00:36:13.334 "name": "BaseBdev2", 00:36:13.334 "uuid": "92ba1779-10ba-49a2-938a-04e79bee8865", 00:36:13.334 "is_configured": true, 00:36:13.334 "data_offset": 2048, 00:36:13.334 "data_size": 63488 00:36:13.334 }, 00:36:13.334 { 00:36:13.334 "name": "BaseBdev3", 00:36:13.334 "uuid": "e1c9e2be-9f48-414f-b96e-681871064a7b", 00:36:13.334 "is_configured": true, 00:36:13.334 "data_offset": 2048, 00:36:13.334 "data_size": 63488 00:36:13.334 }, 00:36:13.334 { 00:36:13.334 "name": "BaseBdev4", 00:36:13.334 "uuid": "efdaa1b7-9366-4d6e-b51d-6fa5937527af", 00:36:13.334 "is_configured": true, 00:36:13.334 "data_offset": 2048, 00:36:13.334 "data_size": 63488 00:36:13.334 } 00:36:13.334 ] 00:36:13.334 }' 00:36:13.334 19:29:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:13.334 19:29:29 -- common/autotest_common.sh@10 -- # set +x 00:36:13.900 19:29:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:14.158 [2024-04-18 19:29:29.994522] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.417 19:29:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.675 19:29:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:14.675 "name": "Existed_Raid", 00:36:14.675 "uuid": "02abb6dd-760f-414c-a48c-ec99576d39a1", 00:36:14.675 "strip_size_kb": 0, 00:36:14.675 "state": "online", 00:36:14.675 "raid_level": "raid1", 00:36:14.675 "superblock": true, 00:36:14.675 "num_base_bdevs": 4, 00:36:14.675 "num_base_bdevs_discovered": 3, 00:36:14.675 "num_base_bdevs_operational": 3, 00:36:14.675 "base_bdevs_list": [ 00:36:14.675 { 00:36:14.675 "name": null, 00:36:14.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.675 "is_configured": false, 00:36:14.675 "data_offset": 2048, 00:36:14.675 "data_size": 63488 00:36:14.675 }, 00:36:14.675 { 00:36:14.675 "name": "BaseBdev2", 00:36:14.675 "uuid": "92ba1779-10ba-49a2-938a-04e79bee8865", 00:36:14.675 "is_configured": true, 00:36:14.675 "data_offset": 2048, 00:36:14.675 "data_size": 63488 00:36:14.675 }, 00:36:14.675 { 00:36:14.675 "name": "BaseBdev3", 00:36:14.675 "uuid": "e1c9e2be-9f48-414f-b96e-681871064a7b", 00:36:14.675 "is_configured": true, 
00:36:14.675 "data_offset": 2048, 00:36:14.675 "data_size": 63488 00:36:14.675 }, 00:36:14.675 { 00:36:14.675 "name": "BaseBdev4", 00:36:14.675 "uuid": "efdaa1b7-9366-4d6e-b51d-6fa5937527af", 00:36:14.675 "is_configured": true, 00:36:14.675 "data_offset": 2048, 00:36:14.675 "data_size": 63488 00:36:14.675 } 00:36:14.675 ] 00:36:14.675 }' 00:36:14.675 19:29:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:14.675 19:29:30 -- common/autotest_common.sh@10 -- # set +x 00:36:15.242 19:29:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:36:15.242 19:29:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:15.242 19:29:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.242 19:29:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:15.501 19:29:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:15.501 19:29:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:15.501 19:29:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:15.759 [2024-04-18 19:29:31.672111] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:16.017 19:29:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:16.017 19:29:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:16.017 19:29:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.017 19:29:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:16.275 19:29:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:16.275 19:29:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:16.275 19:29:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:16.533 [2024-04-18 19:29:32.392772] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:16.792 19:29:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:16.792 19:29:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:16.792 19:29:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.792 19:29:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:17.050 19:29:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:17.050 19:29:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:17.050 19:29:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:36:17.309 [2024-04-18 19:29:33.011558] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:17.309 [2024-04-18 19:29:33.011880] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:17.309 [2024-04-18 19:29:33.121128] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:17.309 [2024-04-18 19:29:33.121378] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:17.309 [2024-04-18 19:29:33.121514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:36:17.309 19:29:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:17.309 19:29:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:17.309 19:29:33 -- 
bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.309 19:29:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:36:17.567 19:29:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:36:17.567 19:29:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:36:17.567 19:29:33 -- bdev/bdev_raid.sh@287 -- # killprocess 131315 00:36:17.567 19:29:33 -- common/autotest_common.sh@936 -- # '[' -z 131315 ']' 00:36:17.567 19:29:33 -- common/autotest_common.sh@940 -- # kill -0 131315 00:36:17.567 19:29:33 -- common/autotest_common.sh@941 -- # uname 00:36:17.567 19:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:17.567 19:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131315 00:36:17.567 killing process with pid 131315 00:36:17.567 19:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:17.567 19:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:17.567 19:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131315' 00:36:17.567 19:29:33 -- common/autotest_common.sh@955 -- # kill 131315 00:36:17.567 19:29:33 -- common/autotest_common.sh@960 -- # wait 131315 00:36:17.567 [2024-04-18 19:29:33.437110] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:17.567 [2024-04-18 19:29:33.437231] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:18.943 ************************************ 00:36:18.943 END TEST raid_state_function_test_sb 00:36:18.943 ************************************ 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:36:18.943 00:36:18.943 real 0m17.976s 00:36:18.943 user 0m31.926s 00:36:18.943 sys 0m2.182s 00:36:18.943 19:29:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:18.943 19:29:34 -- common/autotest_common.sh@10 -- # set +x 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:36:18.943 19:29:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:36:18.943 19:29:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:18.943 19:29:34 -- common/autotest_common.sh@10 -- # set +x 00:36:18.943 ************************************ 00:36:18.943 START TEST raid_superblock_test 00:36:18.943 ************************************ 00:36:18.943 19:29:34 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 4 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:36:18.943 19:29:34 -- 
bdev/bdev_raid.sh@353 -- # strip_size=0 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@357 -- # raid_pid=131825 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131825 /var/tmp/spdk-raid.sock 00:36:18.943 19:29:34 -- common/autotest_common.sh@817 -- # '[' -z 131825 ']' 00:36:18.943 19:29:34 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:18.943 19:29:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:18.943 19:29:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:18.943 19:29:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:18.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:18.943 19:29:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:18.943 19:29:34 -- common/autotest_common.sh@10 -- # set +x 00:36:19.202 [2024-04-18 19:29:34.929495] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:36:19.202 [2024-04-18 19:29:34.929841] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131825 ] 00:36:19.202 [2024-04-18 19:29:35.107348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.461 [2024-04-18 19:29:35.372367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.720 [2024-04-18 19:29:35.570038] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:19.995 19:29:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:19.995 19:29:35 -- common/autotest_common.sh@850 -- # return 0 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:19.995 19:29:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:36:20.256 malloc1 00:36:20.256 19:29:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:20.514 [2024-04-18 19:29:36.358708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:20.514 [2024-04-18 19:29:36.358917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.514 [2024-04-18 19:29:36.358994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:36:20.514 [2024-04-18 19:29:36.359114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.514 [2024-04-18 19:29:36.361577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.514 [2024-04-18 
19:29:36.361718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:20.514 pt1 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:20.514 19:29:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:36:20.772 malloc2 00:36:20.772 19:29:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:21.030 [2024-04-18 19:29:36.869685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:21.030 [2024-04-18 19:29:36.869955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:21.030 [2024-04-18 19:29:36.870026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:21.030 [2024-04-18 19:29:36.870246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:21.030 [2024-04-18 19:29:36.872571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:21.030 [2024-04-18 19:29:36.872726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:21.030 pt2 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:21.030 19:29:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:36:21.288 malloc3 00:36:21.288 19:29:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:21.547 [2024-04-18 19:29:37.298595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:21.547 [2024-04-18 19:29:37.298839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:21.547 [2024-04-18 19:29:37.298903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:21.547 [2024-04-18 19:29:37.299036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:21.547 [2024-04-18 19:29:37.301498] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:21.547 [2024-04-18 
19:29:37.301658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:21.547 pt3 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:21.547 19:29:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:36:21.805 malloc4 00:36:21.805 19:29:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:22.064 [2024-04-18 19:29:37.839907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:22.064 [2024-04-18 19:29:37.840154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.064 [2024-04-18 19:29:37.840285] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:22.064 [2024-04-18 19:29:37.840462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.064 [2024-04-18 19:29:37.842955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.064 [2024-04-18 19:29:37.843114] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:22.064 pt4 00:36:22.064 19:29:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:22.064 19:29:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:22.064 19:29:37 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:36:22.323 [2024-04-18 19:29:38.036088] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:22.323 [2024-04-18 19:29:38.038232] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:22.323 [2024-04-18 19:29:38.038417] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:22.323 [2024-04-18 19:29:38.038559] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:22.323 [2024-04-18 19:29:38.038860] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:36:22.323 [2024-04-18 19:29:38.038962] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:22.323 [2024-04-18 19:29:38.039136] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:36:22.323 [2024-04-18 19:29:38.039589] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:36:22.323 [2024-04-18 19:29:38.039694] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:36:22.323 [2024-04-18 19:29:38.039983] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.323 19:29:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.645 19:29:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:22.645 "name": "raid_bdev1", 00:36:22.645 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:22.645 "strip_size_kb": 0, 00:36:22.645 "state": "online", 00:36:22.645 "raid_level": "raid1", 00:36:22.645 "superblock": true, 00:36:22.645 "num_base_bdevs": 4, 00:36:22.645 "num_base_bdevs_discovered": 4, 00:36:22.645 "num_base_bdevs_operational": 4, 00:36:22.645 "base_bdevs_list": [ 00:36:22.645 { 00:36:22.645 "name": "pt1", 00:36:22.645 "uuid": "e2533fa5-6f6d-520c-a46a-a10a557731d6", 00:36:22.645 "is_configured": true, 00:36:22.645 "data_offset": 2048, 00:36:22.645 "data_size": 63488 00:36:22.645 }, 00:36:22.645 { 00:36:22.645 "name": "pt2", 00:36:22.645 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:22.645 "is_configured": true, 00:36:22.645 "data_offset": 2048, 00:36:22.645 "data_size": 63488 00:36:22.645 }, 00:36:22.645 { 00:36:22.645 "name": "pt3", 00:36:22.645 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:22.645 "is_configured": true, 00:36:22.645 "data_offset": 2048, 00:36:22.645 "data_size": 63488 00:36:22.645 }, 00:36:22.645 { 00:36:22.645 "name": "pt4", 00:36:22.645 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:22.645 "is_configured": true, 00:36:22.645 "data_offset": 2048, 00:36:22.645 "data_size": 63488 00:36:22.645 } 00:36:22.645 ] 00:36:22.645 }' 00:36:22.645 19:29:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:22.645 19:29:38 -- common/autotest_common.sh@10 -- # set +x 00:36:23.213 19:29:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:23.213 19:29:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:36:23.471 [2024-04-18 19:29:39.344647] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:23.471 19:29:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=543a5466-0301-4c9a-9bd1-b849f9e8aca3 00:36:23.471 19:29:39 -- bdev/bdev_raid.sh@380 -- # '[' -z 543a5466-0301-4c9a-9bd1-b849f9e8aca3 ']' 00:36:23.471 19:29:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:23.730 [2024-04-18 19:29:39.628380] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:23.730 [2024-04-18 19:29:39.628575] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:23.730 [2024-04-18 19:29:39.628746] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:23.730 [2024-04-18 
19:29:39.628906] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:23.730 [2024-04-18 19:29:39.628985] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:36:23.730 19:29:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.730 19:29:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:36:24.296 19:29:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:36:24.296 19:29:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:36:24.296 19:29:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:24.296 19:29:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:24.296 19:29:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:24.296 19:29:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:24.554 19:29:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:24.554 19:29:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:24.813 19:29:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:24.813 19:29:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:36:25.072 19:29:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:25.072 19:29:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:25.331 19:29:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:36:25.331 19:29:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:36:25.331 19:29:41 -- common/autotest_common.sh@638 -- # local es=0 00:36:25.331 19:29:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:36:25.331 19:29:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:25.331 19:29:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:25.331 19:29:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:25.331 19:29:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:25.331 19:29:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:25.331 19:29:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:25.331 19:29:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:25.331 19:29:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:25.331 19:29:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:36:25.590 [2024-04-18 19:29:41.332652] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:25.590 [2024-04-18 
19:29:41.334716] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:25.590 [2024-04-18 19:29:41.334877] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:25.590 [2024-04-18 19:29:41.334977] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:36:25.590 [2024-04-18 19:29:41.335080] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:36:25.590 [2024-04-18 19:29:41.335219] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:36:25.590 [2024-04-18 19:29:41.335269] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:36:25.590 [2024-04-18 19:29:41.335355] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:36:25.590 [2024-04-18 19:29:41.335427] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:25.590 [2024-04-18 19:29:41.335513] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:36:25.590 request: 00:36:25.590 { 00:36:25.590 "name": "raid_bdev1", 00:36:25.590 "raid_level": "raid1", 00:36:25.590 "base_bdevs": [ 00:36:25.590 "malloc1", 00:36:25.590 "malloc2", 00:36:25.590 "malloc3", 00:36:25.590 "malloc4" 00:36:25.590 ], 00:36:25.590 "superblock": false, 00:36:25.590 "method": "bdev_raid_create", 00:36:25.590 "req_id": 1 00:36:25.590 } 00:36:25.590 Got JSON-RPC error response 00:36:25.590 response: 00:36:25.590 { 00:36:25.590 "code": -17, 00:36:25.590 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:25.590 } 00:36:25.590 19:29:41 -- common/autotest_common.sh@641 -- # es=1 00:36:25.590 19:29:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:25.590 19:29:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:25.590 19:29:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:25.590 19:29:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.590 19:29:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:36:25.849 19:29:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:36:25.849 19:29:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:36:25.849 19:29:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:26.107 [2024-04-18 19:29:41.800700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:26.107 [2024-04-18 19:29:41.800971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:26.107 [2024-04-18 19:29:41.801033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:26.107 [2024-04-18 19:29:41.801129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:26.107 [2024-04-18 19:29:41.803571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:26.107 [2024-04-18 19:29:41.803748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:26.107 [2024-04-18 19:29:41.804059] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:36:26.107 [2024-04-18 19:29:41.804198] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:26.107 pt1 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.107 19:29:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.365 19:29:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:26.365 "name": "raid_bdev1", 00:36:26.365 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:26.365 "strip_size_kb": 0, 00:36:26.365 "state": "configuring", 00:36:26.365 "raid_level": "raid1", 00:36:26.365 "superblock": true, 00:36:26.365 "num_base_bdevs": 4, 00:36:26.365 "num_base_bdevs_discovered": 1, 00:36:26.365 "num_base_bdevs_operational": 4, 00:36:26.365 "base_bdevs_list": [ 00:36:26.365 { 00:36:26.365 "name": "pt1", 00:36:26.365 "uuid": "e2533fa5-6f6d-520c-a46a-a10a557731d6", 00:36:26.365 "is_configured": true, 00:36:26.365 "data_offset": 2048, 00:36:26.365 "data_size": 63488 00:36:26.365 }, 00:36:26.365 { 00:36:26.365 "name": null, 00:36:26.365 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:26.365 "is_configured": false, 00:36:26.365 "data_offset": 2048, 00:36:26.365 "data_size": 63488 00:36:26.365 }, 00:36:26.365 { 00:36:26.365 "name": null, 00:36:26.365 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:26.365 "is_configured": false, 00:36:26.365 "data_offset": 2048, 00:36:26.365 "data_size": 63488 00:36:26.365 }, 00:36:26.365 { 00:36:26.365 "name": null, 00:36:26.365 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:26.365 "is_configured": false, 00:36:26.365 "data_offset": 2048, 00:36:26.365 "data_size": 63488 00:36:26.365 } 00:36:26.365 ] 00:36:26.365 }' 00:36:26.365 19:29:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:26.365 19:29:42 -- common/autotest_common.sh@10 -- # set +x 00:36:26.934 19:29:42 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:36:26.934 19:29:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:27.193 [2024-04-18 19:29:42.993004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:27.193 [2024-04-18 19:29:42.993307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.193 [2024-04-18 19:29:42.993381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:27.193 [2024-04-18 19:29:42.993566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.193 [2024-04-18 19:29:42.994088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.193 [2024-04-18 
19:29:42.994251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:27.193 [2024-04-18 19:29:42.994463] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:27.193 [2024-04-18 19:29:42.994584] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:27.193 pt2 00:36:27.193 19:29:43 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:27.497 [2024-04-18 19:29:43.229074] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.497 19:29:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.756 19:29:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:27.756 "name": "raid_bdev1", 00:36:27.756 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:27.756 "strip_size_kb": 0, 00:36:27.756 "state": "configuring", 00:36:27.756 "raid_level": "raid1", 00:36:27.756 "superblock": true, 00:36:27.756 "num_base_bdevs": 4, 00:36:27.756 "num_base_bdevs_discovered": 1, 00:36:27.756 "num_base_bdevs_operational": 4, 00:36:27.756 "base_bdevs_list": [ 00:36:27.756 { 00:36:27.756 "name": "pt1", 00:36:27.756 "uuid": "e2533fa5-6f6d-520c-a46a-a10a557731d6", 00:36:27.756 "is_configured": true, 00:36:27.756 "data_offset": 2048, 00:36:27.756 "data_size": 63488 00:36:27.756 }, 00:36:27.756 { 00:36:27.756 "name": null, 00:36:27.756 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:27.756 "is_configured": false, 00:36:27.756 "data_offset": 2048, 00:36:27.756 "data_size": 63488 00:36:27.756 }, 00:36:27.756 { 00:36:27.756 "name": null, 00:36:27.756 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:27.756 "is_configured": false, 00:36:27.756 "data_offset": 2048, 00:36:27.756 "data_size": 63488 00:36:27.756 }, 00:36:27.756 { 00:36:27.756 "name": null, 00:36:27.756 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:27.756 "is_configured": false, 00:36:27.756 "data_offset": 2048, 00:36:27.756 "data_size": 63488 00:36:27.756 } 00:36:27.756 ] 00:36:27.756 }' 00:36:27.756 19:29:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:27.756 19:29:43 -- common/autotest_common.sh@10 -- # set +x 00:36:28.323 19:29:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:36:28.323 19:29:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:28.323 19:29:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:28.581 [2024-04-18 
19:29:44.481402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:28.581 [2024-04-18 19:29:44.481719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:28.581 [2024-04-18 19:29:44.481790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:36:28.581 [2024-04-18 19:29:44.481893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:28.581 [2024-04-18 19:29:44.482442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:28.581 [2024-04-18 19:29:44.482611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:28.581 [2024-04-18 19:29:44.482825] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:28.581 [2024-04-18 19:29:44.482956] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:28.581 pt2 00:36:28.581 19:29:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:28.581 19:29:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:28.581 19:29:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:29.146 [2024-04-18 19:29:44.789516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:29.146 [2024-04-18 19:29:44.789859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:29.146 [2024-04-18 19:29:44.790054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:36:29.146 [2024-04-18 19:29:44.790197] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:29.146 [2024-04-18 19:29:44.790933] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:29.146 [2024-04-18 19:29:44.791184] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:29.146 [2024-04-18 19:29:44.791483] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:29.146 [2024-04-18 19:29:44.791621] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:29.146 pt3 00:36:29.146 19:29:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:29.146 19:29:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:29.146 19:29:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:29.146 [2024-04-18 19:29:45.021576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:29.146 [2024-04-18 19:29:45.021903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:29.146 [2024-04-18 19:29:45.022031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:36:29.146 [2024-04-18 19:29:45.022128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:29.146 [2024-04-18 19:29:45.022726] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:29.146 [2024-04-18 19:29:45.022911] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:29.146 [2024-04-18 19:29:45.023115] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:36:29.146 [2024-04-18 19:29:45.023224] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:29.146 [2024-04-18 19:29:45.023420] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:36:29.146 [2024-04-18 19:29:45.023515] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:29.146 [2024-04-18 19:29:45.023668] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:29.146 [2024-04-18 19:29:45.024142] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:36:29.146 [2024-04-18 19:29:45.024258] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:36:29.146 [2024-04-18 19:29:45.024479] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:29.146 pt4 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.146 19:29:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:29.713 19:29:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:29.713 "name": "raid_bdev1", 00:36:29.713 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:29.713 "strip_size_kb": 0, 00:36:29.713 "state": "online", 00:36:29.713 "raid_level": "raid1", 00:36:29.713 "superblock": true, 00:36:29.713 "num_base_bdevs": 4, 00:36:29.713 "num_base_bdevs_discovered": 4, 00:36:29.713 "num_base_bdevs_operational": 4, 00:36:29.713 "base_bdevs_list": [ 00:36:29.713 { 00:36:29.713 "name": "pt1", 00:36:29.713 "uuid": "e2533fa5-6f6d-520c-a46a-a10a557731d6", 00:36:29.713 "is_configured": true, 00:36:29.713 "data_offset": 2048, 00:36:29.713 "data_size": 63488 00:36:29.713 }, 00:36:29.713 { 00:36:29.713 "name": "pt2", 00:36:29.713 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:29.713 "is_configured": true, 00:36:29.713 "data_offset": 2048, 00:36:29.713 "data_size": 63488 00:36:29.713 }, 00:36:29.713 { 00:36:29.713 "name": "pt3", 00:36:29.713 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:29.713 "is_configured": true, 00:36:29.713 "data_offset": 2048, 00:36:29.713 "data_size": 63488 00:36:29.713 }, 00:36:29.713 { 00:36:29.713 "name": "pt4", 00:36:29.713 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:29.713 "is_configured": true, 00:36:29.713 "data_offset": 2048, 00:36:29.713 "data_size": 63488 00:36:29.713 } 00:36:29.713 ] 00:36:29.713 }' 00:36:29.713 19:29:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:29.713 19:29:45 -- common/autotest_common.sh@10 -- # set +x 
00:36:30.281 19:29:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:30.281 19:29:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:36:30.549 [2024-04-18 19:29:46.342118] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:30.549 19:29:46 -- bdev/bdev_raid.sh@430 -- # '[' 543a5466-0301-4c9a-9bd1-b849f9e8aca3 '!=' 543a5466-0301-4c9a-9bd1-b849f9e8aca3 ']' 00:36:30.549 19:29:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:36:30.549 19:29:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:30.549 19:29:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:36:30.549 19:29:46 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:30.819 [2024-04-18 19:29:46.545938] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.819 19:29:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.096 19:29:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:31.096 "name": "raid_bdev1", 00:36:31.096 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:31.096 "strip_size_kb": 0, 00:36:31.096 "state": "online", 00:36:31.096 "raid_level": "raid1", 00:36:31.096 "superblock": true, 00:36:31.096 "num_base_bdevs": 4, 00:36:31.096 "num_base_bdevs_discovered": 3, 00:36:31.096 "num_base_bdevs_operational": 3, 00:36:31.096 "base_bdevs_list": [ 00:36:31.096 { 00:36:31.096 "name": null, 00:36:31.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.096 "is_configured": false, 00:36:31.096 "data_offset": 2048, 00:36:31.096 "data_size": 63488 00:36:31.096 }, 00:36:31.096 { 00:36:31.096 "name": "pt2", 00:36:31.096 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:31.096 "is_configured": true, 00:36:31.096 "data_offset": 2048, 00:36:31.096 "data_size": 63488 00:36:31.096 }, 00:36:31.096 { 00:36:31.096 "name": "pt3", 00:36:31.096 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:31.096 "is_configured": true, 00:36:31.096 "data_offset": 2048, 00:36:31.096 "data_size": 63488 00:36:31.096 }, 00:36:31.096 { 00:36:31.096 "name": "pt4", 00:36:31.096 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:31.096 "is_configured": true, 00:36:31.096 "data_offset": 2048, 00:36:31.096 "data_size": 63488 00:36:31.096 } 00:36:31.096 ] 00:36:31.096 }' 00:36:31.096 19:29:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:31.096 19:29:46 -- common/autotest_common.sh@10 -- # set +x 00:36:31.662 19:29:47 -- bdev/bdev_raid.sh@442 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:31.920 [2024-04-18 19:29:47.774120] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:31.920 [2024-04-18 19:29:47.774333] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:31.920 [2024-04-18 19:29:47.774536] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:31.920 [2024-04-18 19:29:47.774725] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:31.920 [2024-04-18 19:29:47.774809] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:36:31.920 19:29:47 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.920 19:29:47 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:36:32.178 19:29:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:36:32.178 19:29:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:36:32.178 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:36:32.178 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:32.178 19:29:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:32.437 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:36:32.437 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:32.437 19:29:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:32.695 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:36:32.695 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:32.695 19:29:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:36:32.962 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:36:32.962 19:29:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:36:32.962 19:29:48 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:36:32.962 19:29:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:36:32.962 19:29:48 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:33.233 [2024-04-18 19:29:48.910394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:33.233 [2024-04-18 19:29:48.910633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:33.233 [2024-04-18 19:29:48.910753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:36:33.233 [2024-04-18 19:29:48.910872] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:33.233 [2024-04-18 19:29:48.913463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:33.233 [2024-04-18 19:29:48.913649] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:33.233 [2024-04-18 19:29:48.913830] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:33.233 [2024-04-18 19:29:48.913972] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:33.233 pt2 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 3 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.233 19:29:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.491 19:29:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:33.491 "name": "raid_bdev1", 00:36:33.491 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:33.491 "strip_size_kb": 0, 00:36:33.491 "state": "configuring", 00:36:33.491 "raid_level": "raid1", 00:36:33.491 "superblock": true, 00:36:33.491 "num_base_bdevs": 4, 00:36:33.491 "num_base_bdevs_discovered": 1, 00:36:33.491 "num_base_bdevs_operational": 3, 00:36:33.491 "base_bdevs_list": [ 00:36:33.491 { 00:36:33.491 "name": null, 00:36:33.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:33.491 "is_configured": false, 00:36:33.491 "data_offset": 2048, 00:36:33.491 "data_size": 63488 00:36:33.491 }, 00:36:33.491 { 00:36:33.491 "name": "pt2", 00:36:33.491 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:33.491 "is_configured": true, 00:36:33.491 "data_offset": 2048, 00:36:33.491 "data_size": 63488 00:36:33.491 }, 00:36:33.491 { 00:36:33.491 "name": null, 00:36:33.491 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:33.491 "is_configured": false, 00:36:33.491 "data_offset": 2048, 00:36:33.491 "data_size": 63488 00:36:33.491 }, 00:36:33.491 { 00:36:33.491 "name": null, 00:36:33.491 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:33.491 "is_configured": false, 00:36:33.491 "data_offset": 2048, 00:36:33.491 "data_size": 63488 00:36:33.491 } 00:36:33.491 ] 00:36:33.491 }' 00:36:33.491 19:29:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:33.491 19:29:49 -- common/autotest_common.sh@10 -- # set +x 00:36:34.057 19:29:49 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:36:34.057 19:29:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:36:34.057 19:29:49 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:34.314 [2024-04-18 19:29:50.178822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:34.314 [2024-04-18 19:29:50.179080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.314 [2024-04-18 19:29:50.179155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:36:34.314 [2024-04-18 19:29:50.179290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.314 [2024-04-18 19:29:50.179836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.314 [2024-04-18 19:29:50.179997] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:34.314 
[2024-04-18 19:29:50.180215] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:34.314 [2024-04-18 19:29:50.180326] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:34.314 pt3 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.314 19:29:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.573 19:29:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:34.573 "name": "raid_bdev1", 00:36:34.573 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:34.573 "strip_size_kb": 0, 00:36:34.573 "state": "configuring", 00:36:34.573 "raid_level": "raid1", 00:36:34.573 "superblock": true, 00:36:34.573 "num_base_bdevs": 4, 00:36:34.573 "num_base_bdevs_discovered": 2, 00:36:34.573 "num_base_bdevs_operational": 3, 00:36:34.573 "base_bdevs_list": [ 00:36:34.573 { 00:36:34.573 "name": null, 00:36:34.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.573 "is_configured": false, 00:36:34.573 "data_offset": 2048, 00:36:34.573 "data_size": 63488 00:36:34.573 }, 00:36:34.573 { 00:36:34.573 "name": "pt2", 00:36:34.573 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:34.573 "is_configured": true, 00:36:34.573 "data_offset": 2048, 00:36:34.573 "data_size": 63488 00:36:34.573 }, 00:36:34.573 { 00:36:34.573 "name": "pt3", 00:36:34.573 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:34.573 "is_configured": true, 00:36:34.573 "data_offset": 2048, 00:36:34.573 "data_size": 63488 00:36:34.573 }, 00:36:34.573 { 00:36:34.573 "name": null, 00:36:34.573 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:34.573 "is_configured": false, 00:36:34.573 "data_offset": 2048, 00:36:34.573 "data_size": 63488 00:36:34.573 } 00:36:34.573 ] 00:36:34.573 }' 00:36:34.573 19:29:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:34.573 19:29:50 -- common/autotest_common.sh@10 -- # set +x 00:36:35.140 19:29:51 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:36:35.141 19:29:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:36:35.141 19:29:51 -- bdev/bdev_raid.sh@462 -- # i=3 00:36:35.141 19:29:51 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:35.406 [2024-04-18 19:29:51.259051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:35.406 [2024-04-18 19:29:51.259297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.406 [2024-04-18 19:29:51.259434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000c380 00:36:35.406 [2024-04-18 19:29:51.259550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.406 [2024-04-18 19:29:51.260083] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.406 [2024-04-18 19:29:51.260220] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:35.406 [2024-04-18 19:29:51.260408] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:36:35.406 [2024-04-18 19:29:51.260503] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:35.406 [2024-04-18 19:29:51.260665] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:36:35.406 [2024-04-18 19:29:51.260745] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:35.406 [2024-04-18 19:29:51.260921] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:36:35.406 [2024-04-18 19:29:51.261345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:36:35.406 [2024-04-18 19:29:51.261451] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:36:35.406 [2024-04-18 19:29:51.261668] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:35.406 pt4 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.406 19:29:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.679 19:29:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:35.679 "name": "raid_bdev1", 00:36:35.679 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:35.679 "strip_size_kb": 0, 00:36:35.679 "state": "online", 00:36:35.679 "raid_level": "raid1", 00:36:35.679 "superblock": true, 00:36:35.679 "num_base_bdevs": 4, 00:36:35.679 "num_base_bdevs_discovered": 3, 00:36:35.679 "num_base_bdevs_operational": 3, 00:36:35.679 "base_bdevs_list": [ 00:36:35.679 { 00:36:35.679 "name": null, 00:36:35.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.679 "is_configured": false, 00:36:35.679 "data_offset": 2048, 00:36:35.679 "data_size": 63488 00:36:35.679 }, 00:36:35.679 { 00:36:35.679 "name": "pt2", 00:36:35.679 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:35.679 "is_configured": true, 00:36:35.679 "data_offset": 2048, 00:36:35.679 "data_size": 63488 00:36:35.679 }, 00:36:35.679 { 00:36:35.679 "name": "pt3", 00:36:35.679 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:35.679 "is_configured": true, 00:36:35.679 "data_offset": 2048, 
00:36:35.679 "data_size": 63488 00:36:35.679 }, 00:36:35.679 { 00:36:35.679 "name": "pt4", 00:36:35.679 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:35.679 "is_configured": true, 00:36:35.679 "data_offset": 2048, 00:36:35.679 "data_size": 63488 00:36:35.679 } 00:36:35.679 ] 00:36:35.679 }' 00:36:35.679 19:29:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:35.679 19:29:51 -- common/autotest_common.sh@10 -- # set +x 00:36:36.614 19:29:52 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:36:36.614 19:29:52 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:36.614 [2024-04-18 19:29:52.459722] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:36.614 [2024-04-18 19:29:52.459973] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:36.614 [2024-04-18 19:29:52.460129] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:36.614 [2024-04-18 19:29:52.460237] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:36.614 [2024-04-18 19:29:52.460393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:36:36.614 19:29:52 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.614 19:29:52 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:36:36.873 19:29:52 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:36:36.873 19:29:52 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:36:36.873 19:29:52 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:37.441 [2024-04-18 19:29:53.067903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:37.441 [2024-04-18 19:29:53.068224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.441 [2024-04-18 19:29:53.068351] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:36:37.441 [2024-04-18 19:29:53.068442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.441 [2024-04-18 19:29:53.070904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.441 [2024-04-18 19:29:53.071091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:37.441 [2024-04-18 19:29:53.071297] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:36:37.441 [2024-04-18 19:29:53.071438] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:37.441 pt1 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:37.441 19:29:53 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:37.441 "name": "raid_bdev1", 00:36:37.441 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:37.441 "strip_size_kb": 0, 00:36:37.441 "state": "configuring", 00:36:37.441 "raid_level": "raid1", 00:36:37.441 "superblock": true, 00:36:37.441 "num_base_bdevs": 4, 00:36:37.441 "num_base_bdevs_discovered": 1, 00:36:37.441 "num_base_bdevs_operational": 4, 00:36:37.441 "base_bdevs_list": [ 00:36:37.441 { 00:36:37.441 "name": "pt1", 00:36:37.441 "uuid": "e2533fa5-6f6d-520c-a46a-a10a557731d6", 00:36:37.441 "is_configured": true, 00:36:37.441 "data_offset": 2048, 00:36:37.441 "data_size": 63488 00:36:37.441 }, 00:36:37.441 { 00:36:37.441 "name": null, 00:36:37.441 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:37.441 "is_configured": false, 00:36:37.441 "data_offset": 2048, 00:36:37.441 "data_size": 63488 00:36:37.441 }, 00:36:37.441 { 00:36:37.441 "name": null, 00:36:37.441 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:37.441 "is_configured": false, 00:36:37.441 "data_offset": 2048, 00:36:37.441 "data_size": 63488 00:36:37.441 }, 00:36:37.441 { 00:36:37.441 "name": null, 00:36:37.441 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:37.441 "is_configured": false, 00:36:37.441 "data_offset": 2048, 00:36:37.441 "data_size": 63488 00:36:37.441 } 00:36:37.441 ] 00:36:37.441 }' 00:36:37.441 19:29:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:37.441 19:29:53 -- common/autotest_common.sh@10 -- # set +x 00:36:38.375 19:29:53 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:36:38.375 19:29:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:38.375 19:29:53 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:38.375 19:29:54 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:36:38.375 19:29:54 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:38.375 19:29:54 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:38.633 19:29:54 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:36:38.633 19:29:54 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:38.633 19:29:54 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:36:38.891 19:29:54 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:36:38.891 19:29:54 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:36:38.891 19:29:54 -- bdev/bdev_raid.sh@489 -- # i=3 00:36:38.891 19:29:54 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:39.150 [2024-04-18 19:29:54.984326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:39.150 [2024-04-18 19:29:54.984990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.150 [2024-04-18 19:29:54.985061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:36:39.150 [2024-04-18 19:29:54.985182] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.150 [2024-04-18 19:29:54.985720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.150 [2024-04-18 19:29:54.985885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:39.150 [2024-04-18 19:29:54.986123] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:36:39.150 [2024-04-18 19:29:54.986220] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:39.150 [2024-04-18 19:29:54.986322] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:39.150 [2024-04-18 19:29:54.986369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:36:39.150 [2024-04-18 19:29:54.986564] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:39.150 pt4 00:36:39.150 19:29:54 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:36:39.150 19:29:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:39.150 19:29:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.150 19:29:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.409 19:29:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:39.409 "name": "raid_bdev1", 00:36:39.409 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:39.409 "strip_size_kb": 0, 00:36:39.409 "state": "configuring", 00:36:39.409 "raid_level": "raid1", 00:36:39.409 "superblock": true, 00:36:39.409 "num_base_bdevs": 4, 00:36:39.409 "num_base_bdevs_discovered": 1, 00:36:39.409 "num_base_bdevs_operational": 3, 00:36:39.409 "base_bdevs_list": [ 00:36:39.409 { 00:36:39.409 "name": null, 00:36:39.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.409 "is_configured": false, 00:36:39.409 "data_offset": 2048, 00:36:39.409 "data_size": 63488 00:36:39.409 }, 00:36:39.409 { 00:36:39.409 "name": null, 00:36:39.409 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:39.409 "is_configured": false, 00:36:39.409 "data_offset": 2048, 00:36:39.409 "data_size": 63488 00:36:39.409 }, 00:36:39.409 { 00:36:39.409 "name": null, 00:36:39.409 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:39.409 "is_configured": false, 00:36:39.409 "data_offset": 2048, 00:36:39.409 "data_size": 63488 00:36:39.409 }, 00:36:39.409 { 00:36:39.409 "name": "pt4", 00:36:39.409 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:39.409 "is_configured": true, 00:36:39.409 "data_offset": 2048, 00:36:39.409 "data_size": 63488 00:36:39.409 } 00:36:39.409 ] 00:36:39.409 }' 00:36:39.409 19:29:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:39.409 19:29:55 -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.976 19:29:55 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:36:39.976 19:29:55 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:36:39.976 19:29:55 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:40.234 [2024-04-18 19:29:56.156577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:40.234 [2024-04-18 19:29:56.156871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.234 [2024-04-18 19:29:56.157051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:36:40.234 [2024-04-18 19:29:56.157165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.234 [2024-04-18 19:29:56.157709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.234 [2024-04-18 19:29:56.157882] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:40.234 [2024-04-18 19:29:56.158088] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:40.234 [2024-04-18 19:29:56.158195] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:40.234 pt2 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:40.492 [2024-04-18 19:29:56.372640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:40.492 [2024-04-18 19:29:56.372932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.492 [2024-04-18 19:29:56.372999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:36:40.492 [2024-04-18 19:29:56.373213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.492 [2024-04-18 19:29:56.373743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.492 [2024-04-18 19:29:56.373919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:40.492 [2024-04-18 19:29:56.374122] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:40.492 [2024-04-18 19:29:56.374214] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:40.492 [2024-04-18 19:29:56.374418] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:36:40.492 [2024-04-18 19:29:56.374511] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:40.492 [2024-04-18 19:29:56.374661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:36:40.492 [2024-04-18 19:29:56.375076] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:36:40.492 [2024-04-18 19:29:56.375173] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:36:40.492 [2024-04-18 19:29:56.375418] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:40.492 pt3 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 
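The passthru re-registration and the subsequent state check in the trace above are both plain RPC calls against the test application's UNIX socket. A minimal standalone sketch of that step, using only commands that appear verbatim in the trace (the rpc.py path and the -s socket are the ones this run uses; adjust both for a different checkout):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # re-create a passthru bdev on top of its malloc base, as bdev_raid.sh@498 does for pt2
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # dump all raid bdevs and keep only the raid_bdev1 entry, as bdev_raid.sh@127 does
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'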
00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.492 19:29:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.060 19:29:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:41.060 "name": "raid_bdev1", 00:36:41.060 "uuid": "543a5466-0301-4c9a-9bd1-b849f9e8aca3", 00:36:41.060 "strip_size_kb": 0, 00:36:41.060 "state": "online", 00:36:41.060 "raid_level": "raid1", 00:36:41.060 "superblock": true, 00:36:41.060 "num_base_bdevs": 4, 00:36:41.060 "num_base_bdevs_discovered": 3, 00:36:41.060 "num_base_bdevs_operational": 3, 00:36:41.060 "base_bdevs_list": [ 00:36:41.060 { 00:36:41.060 "name": null, 00:36:41.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:41.060 "is_configured": false, 00:36:41.060 "data_offset": 2048, 00:36:41.060 "data_size": 63488 00:36:41.060 }, 00:36:41.060 { 00:36:41.060 "name": "pt2", 00:36:41.060 "uuid": "b0185a8e-6b50-5507-8ae5-0a0f1b4faaf2", 00:36:41.060 "is_configured": true, 00:36:41.060 "data_offset": 2048, 00:36:41.060 "data_size": 63488 00:36:41.060 }, 00:36:41.060 { 00:36:41.060 "name": "pt3", 00:36:41.060 "uuid": "a3a66150-7a7e-5dc1-bc97-8d4a82461a4f", 00:36:41.060 "is_configured": true, 00:36:41.060 "data_offset": 2048, 00:36:41.060 "data_size": 63488 00:36:41.060 }, 00:36:41.060 { 00:36:41.060 "name": "pt4", 00:36:41.060 "uuid": "9569ce74-4c78-59ee-8c3f-1c82b29f3d8b", 00:36:41.060 "is_configured": true, 00:36:41.060 "data_offset": 2048, 00:36:41.060 "data_size": 63488 00:36:41.060 } 00:36:41.060 ] 00:36:41.060 }' 00:36:41.060 19:29:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:41.060 19:29:56 -- common/autotest_common.sh@10 -- # set +x 00:36:41.630 19:29:57 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:41.630 19:29:57 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:36:41.888 [2024-04-18 19:29:57.653182] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:41.888 19:29:57 -- bdev/bdev_raid.sh@506 -- # '[' 543a5466-0301-4c9a-9bd1-b849f9e8aca3 '!=' 543a5466-0301-4c9a-9bd1-b849f9e8aca3 ']' 00:36:41.888 19:29:57 -- bdev/bdev_raid.sh@511 -- # killprocess 131825 00:36:41.888 19:29:57 -- common/autotest_common.sh@936 -- # '[' -z 131825 ']' 00:36:41.888 19:29:57 -- common/autotest_common.sh@940 -- # kill -0 131825 00:36:41.888 19:29:57 -- common/autotest_common.sh@941 -- # uname 00:36:41.888 19:29:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:41.888 19:29:57 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 131825 00:36:41.888 killing process with pid 131825 00:36:41.888 19:29:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:41.888 19:29:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:41.888 19:29:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131825' 00:36:41.888 19:29:57 -- common/autotest_common.sh@955 -- # kill 131825 00:36:41.888 19:29:57 -- common/autotest_common.sh@960 -- # wait 131825 00:36:41.888 [2024-04-18 19:29:57.694162] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:41.888 [2024-04-18 19:29:57.694244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:41.888 [2024-04-18 19:29:57.694330] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:41.888 [2024-04-18 19:29:57.694340] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:36:42.454 [2024-04-18 19:29:58.106760] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:43.827 ************************************ 00:36:43.827 END TEST raid_superblock_test 00:36:43.827 ************************************ 00:36:43.827 19:29:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:36:43.827 00:36:43.827 real 0m24.647s 00:36:43.827 user 0m44.735s 00:36:43.827 sys 0m3.174s 00:36:43.827 19:29:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:43.828 19:29:59 -- common/autotest_common.sh@10 -- # set +x 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:36:43.828 19:29:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:36:43.828 19:29:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:43.828 19:29:59 -- common/autotest_common.sh@10 -- # set +x 00:36:43.828 ************************************ 00:36:43.828 START TEST raid_rebuild_test 00:36:43.828 ************************************ 00:36:43.828 19:29:59 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false false 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@524 -- # local create_arg 
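killprocess tears the test application down the same way in every test; the autotest_common.sh lines traced above are the whole pattern. A hedged sketch of it (131825 is this run's pid; any app started by the harness shell is stopped the same way):

  pid=131825                          # pid recorded when the app was started
  kill -0 "$pid"                      # autotest_common.sh@940: fail fast if it already exited
  ps --no-headers -o comm= "$pid"     # autotest_common.sh@942: confirm it is the expected reactor process
  kill "$pid"                         # autotest_common.sh@955: ask it to shut down
  wait "$pid"                         # autotest_common.sh@960: reap the child so the next test starts clean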
00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@544 -- # raid_pid=132576 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132576 /var/tmp/spdk-raid.sock 00:36:43.828 19:29:59 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:43.828 19:29:59 -- common/autotest_common.sh@817 -- # '[' -z 132576 ']' 00:36:43.828 19:29:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:43.828 19:29:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:43.828 19:29:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:43.828 19:29:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:43.828 19:29:59 -- common/autotest_common.sh@10 -- # set +x 00:36:43.828 [2024-04-18 19:29:59.687800] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:36:43.828 [2024-04-18 19:29:59.688214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132576 ] 00:36:43.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:43.828 Zero copy mechanism will not be used. 
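raid_rebuild_test drives everything through a dedicated bdevperf instance; the invocation traced at bdev_raid.sh@543 above is the actual launch. A sketch of that launch-and-wait step under the same paths (bdevperf must already be built in this checkout; the socket poll below is only a stand-in for the harness's waitforlisten helper):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # wait until the app has created its RPC socket before issuing any rpc.py calls
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done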
00:36:44.086 [2024-04-18 19:29:59.860751] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.345 [2024-04-18 19:30:00.102018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.603 [2024-04-18 19:30:00.353747] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:44.861 19:30:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:44.861 19:30:00 -- common/autotest_common.sh@850 -- # return 0 00:36:44.861 19:30:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:36:44.861 19:30:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:36:44.861 19:30:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:45.119 BaseBdev1 00:36:45.119 19:30:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:36:45.119 19:30:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:36:45.119 19:30:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:45.377 BaseBdev2 00:36:45.377 19:30:01 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:36:45.634 spare_malloc 00:36:45.634 19:30:01 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:46.200 spare_delay 00:36:46.200 19:30:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:46.200 [2024-04-18 19:30:02.056486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:46.200 [2024-04-18 19:30:02.056823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:46.200 [2024-04-18 19:30:02.056952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:46.200 [2024-04-18 19:30:02.057107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:46.200 [2024-04-18 19:30:02.060502] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:46.200 [2024-04-18 19:30:02.060724] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:46.200 spare 00:36:46.200 19:30:02 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:36:46.459 [2024-04-18 19:30:02.341177] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:46.459 [2024-04-18 19:30:02.343727] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:46.459 [2024-04-18 19:30:02.343975] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:36:46.459 [2024-04-18 19:30:02.344020] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:36:46.459 [2024-04-18 19:30:02.344272] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:36:46.459 [2024-04-18 19:30:02.344719] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:36:46.459 [2024-04-18 19:30:02.344836] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x616000008480 00:36:46.459 [2024-04-18 19:30:02.345175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.459 19:30:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:46.718 19:30:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:46.718 "name": "raid_bdev1", 00:36:46.718 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:46.718 "strip_size_kb": 0, 00:36:46.718 "state": "online", 00:36:46.718 "raid_level": "raid1", 00:36:46.718 "superblock": false, 00:36:46.718 "num_base_bdevs": 2, 00:36:46.718 "num_base_bdevs_discovered": 2, 00:36:46.718 "num_base_bdevs_operational": 2, 00:36:46.718 "base_bdevs_list": [ 00:36:46.718 { 00:36:46.718 "name": "BaseBdev1", 00:36:46.718 "uuid": "9ef7e007-e4e4-4652-8e9e-8c7d376f11f7", 00:36:46.718 "is_configured": true, 00:36:46.718 "data_offset": 0, 00:36:46.718 "data_size": 65536 00:36:46.718 }, 00:36:46.718 { 00:36:46.718 "name": "BaseBdev2", 00:36:46.718 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:46.718 "is_configured": true, 00:36:46.718 "data_offset": 0, 00:36:46.718 "data_size": 65536 00:36:46.718 } 00:36:46.718 ] 00:36:46.718 }' 00:36:46.718 19:30:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:46.718 19:30:02 -- common/autotest_common.sh@10 -- # set +x 00:36:47.653 19:30:03 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:47.653 19:30:03 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:36:47.911 [2024-04-18 19:30:03.585749] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:47.911 19:30:03 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:36:47.911 19:30:03 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.911 19:30:03 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:48.170 19:30:03 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:36:48.170 19:30:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:36:48.170 19:30:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:36:48.170 19:30:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:36:48.170 
19:30:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@12 -- # local i 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:48.170 19:30:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:48.427 [2024-04-18 19:30:04.129664] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:36:48.427 /dev/nbd0 00:36:48.427 19:30:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:48.427 19:30:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:48.427 19:30:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:36:48.427 19:30:04 -- common/autotest_common.sh@855 -- # local i 00:36:48.427 19:30:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:36:48.427 19:30:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:36:48.427 19:30:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:36:48.427 19:30:04 -- common/autotest_common.sh@859 -- # break 00:36:48.428 19:30:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:48.428 19:30:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:48.428 19:30:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:48.428 1+0 records in 00:36:48.428 1+0 records out 00:36:48.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445764 s, 9.2 MB/s 00:36:48.428 19:30:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:48.428 19:30:04 -- common/autotest_common.sh@872 -- # size=4096 00:36:48.428 19:30:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:48.428 19:30:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:36:48.428 19:30:04 -- common/autotest_common.sh@875 -- # return 0 00:36:48.428 19:30:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:48.428 19:30:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:48.428 19:30:04 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:36:48.428 19:30:04 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:36:48.428 19:30:04 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:36:53.693 65536+0 records in 00:36:53.693 65536+0 records out 00:36:53.693 33554432 bytes (34 MB, 32 MiB) copied, 4.47508 s, 7.5 MB/s 00:36:53.693 19:30:08 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@51 -- # local i 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@38 -- 
# grep -q -w nbd0 /proc/partitions 00:36:53.693 [2024-04-18 19:30:08.973459] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@41 -- # break 00:36:53.693 19:30:08 -- bdev/nbd_common.sh@45 -- # return 0 00:36:53.693 19:30:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:53.693 [2024-04-18 19:30:09.181295] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.693 19:30:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:53.694 "name": "raid_bdev1", 00:36:53.694 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:53.694 "strip_size_kb": 0, 00:36:53.694 "state": "online", 00:36:53.694 "raid_level": "raid1", 00:36:53.694 "superblock": false, 00:36:53.694 "num_base_bdevs": 2, 00:36:53.694 "num_base_bdevs_discovered": 1, 00:36:53.694 "num_base_bdevs_operational": 1, 00:36:53.694 "base_bdevs_list": [ 00:36:53.694 { 00:36:53.694 "name": null, 00:36:53.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.694 "is_configured": false, 00:36:53.694 "data_offset": 0, 00:36:53.694 "data_size": 65536 00:36:53.694 }, 00:36:53.694 { 00:36:53.694 "name": "BaseBdev2", 00:36:53.694 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:53.694 "is_configured": true, 00:36:53.694 "data_offset": 0, 00:36:53.694 "data_size": 65536 00:36:53.694 } 00:36:53.694 ] 00:36:53.694 }' 00:36:53.694 19:30:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:53.694 19:30:09 -- common/autotest_common.sh@10 -- # set +x 00:36:54.644 19:30:10 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:54.644 [2024-04-18 19:30:10.502261] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:36:54.644 [2024-04-18 19:30:10.502544] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:54.644 [2024-04-18 19:30:10.523114] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:36:54.644 [2024-04-18 19:30:10.525496] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:54.644 19:30:10 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
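The degrade-and-rebuild step itself is only two RPCs: the trace removes BaseBdev1, checks that raid_bdev1 stays online with a single operational base bdev, then attaches the spare, at which point the process block in the JSON switches to a rebuild targeting spare. A condensed sketch using the same RPCs and jq filters as the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1       # degrade the array (bdev_raid.sh@591)
  $rpc -s $sock bdev_raid_add_base_bdev raid_bdev1 spare   # attach the spare; the rebuild starts (bdev_raid.sh@597)
  # inspect the rebuild the same way verify_raid_bdev_process does: target is "spare" while it runs, "none" once done
  $rpc -s $sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'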
00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:56.019 "name": "raid_bdev1", 00:36:56.019 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:56.019 "strip_size_kb": 0, 00:36:56.019 "state": "online", 00:36:56.019 "raid_level": "raid1", 00:36:56.019 "superblock": false, 00:36:56.019 "num_base_bdevs": 2, 00:36:56.019 "num_base_bdevs_discovered": 2, 00:36:56.019 "num_base_bdevs_operational": 2, 00:36:56.019 "process": { 00:36:56.019 "type": "rebuild", 00:36:56.019 "target": "spare", 00:36:56.019 "progress": { 00:36:56.019 "blocks": 24576, 00:36:56.019 "percent": 37 00:36:56.019 } 00:36:56.019 }, 00:36:56.019 "base_bdevs_list": [ 00:36:56.019 { 00:36:56.019 "name": "spare", 00:36:56.019 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:36:56.019 "is_configured": true, 00:36:56.019 "data_offset": 0, 00:36:56.019 "data_size": 65536 00:36:56.019 }, 00:36:56.019 { 00:36:56.019 "name": "BaseBdev2", 00:36:56.019 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:56.019 "is_configured": true, 00:36:56.019 "data_offset": 0, 00:36:56.019 "data_size": 65536 00:36:56.019 } 00:36:56.019 ] 00:36:56.019 }' 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:56.019 19:30:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:56.020 19:30:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:56.020 19:30:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:56.020 19:30:11 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:56.277 [2024-04-18 19:30:12.115236] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:56.278 [2024-04-18 19:30:12.135163] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:56.278 [2024-04-18 19:30:12.135377] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.278 19:30:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.537 19:30:12 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:36:56.537 "name": "raid_bdev1", 00:36:56.537 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:56.537 "strip_size_kb": 0, 00:36:56.537 "state": "online", 00:36:56.537 "raid_level": "raid1", 00:36:56.537 "superblock": false, 00:36:56.537 "num_base_bdevs": 2, 00:36:56.537 "num_base_bdevs_discovered": 1, 00:36:56.537 "num_base_bdevs_operational": 1, 00:36:56.537 "base_bdevs_list": [ 00:36:56.537 { 00:36:56.537 "name": null, 00:36:56.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.537 "is_configured": false, 00:36:56.537 "data_offset": 0, 00:36:56.537 "data_size": 65536 00:36:56.537 }, 00:36:56.537 { 00:36:56.537 "name": "BaseBdev2", 00:36:56.537 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:56.537 "is_configured": true, 00:36:56.537 "data_offset": 0, 00:36:56.537 "data_size": 65536 00:36:56.537 } 00:36:56.537 ] 00:36:56.537 }' 00:36:56.537 19:30:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:56.537 19:30:12 -- common/autotest_common.sh@10 -- # set +x 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:57.470 "name": "raid_bdev1", 00:36:57.470 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:57.470 "strip_size_kb": 0, 00:36:57.470 "state": "online", 00:36:57.470 "raid_level": "raid1", 00:36:57.470 "superblock": false, 00:36:57.470 "num_base_bdevs": 2, 00:36:57.470 "num_base_bdevs_discovered": 1, 00:36:57.470 "num_base_bdevs_operational": 1, 00:36:57.470 "base_bdevs_list": [ 00:36:57.470 { 00:36:57.470 "name": null, 00:36:57.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.470 "is_configured": false, 00:36:57.470 "data_offset": 0, 00:36:57.470 "data_size": 65536 00:36:57.470 }, 00:36:57.470 { 00:36:57.470 "name": "BaseBdev2", 00:36:57.470 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:57.470 "is_configured": true, 00:36:57.470 "data_offset": 0, 00:36:57.470 "data_size": 65536 00:36:57.470 } 00:36:57.470 ] 00:36:57.470 }' 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:36:57.470 19:30:13 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:57.727 [2024-04-18 19:30:13.607390] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:36:57.727 [2024-04-18 19:30:13.607681] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:57.727 [2024-04-18 19:30:13.624961] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:36:57.727 [2024-04-18 19:30:13.627248] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:36:57.727 19:30:13 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:59.103 "name": "raid_bdev1", 00:36:59.103 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:59.103 "strip_size_kb": 0, 00:36:59.103 "state": "online", 00:36:59.103 "raid_level": "raid1", 00:36:59.103 "superblock": false, 00:36:59.103 "num_base_bdevs": 2, 00:36:59.103 "num_base_bdevs_discovered": 2, 00:36:59.103 "num_base_bdevs_operational": 2, 00:36:59.103 "process": { 00:36:59.103 "type": "rebuild", 00:36:59.103 "target": "spare", 00:36:59.103 "progress": { 00:36:59.103 "blocks": 24576, 00:36:59.103 "percent": 37 00:36:59.103 } 00:36:59.103 }, 00:36:59.103 "base_bdevs_list": [ 00:36:59.103 { 00:36:59.103 "name": "spare", 00:36:59.103 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:36:59.103 "is_configured": true, 00:36:59.103 "data_offset": 0, 00:36:59.103 "data_size": 65536 00:36:59.103 }, 00:36:59.103 { 00:36:59.103 "name": "BaseBdev2", 00:36:59.103 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:59.103 "is_configured": true, 00:36:59.103 "data_offset": 0, 00:36:59.103 "data_size": 65536 00:36:59.103 } 00:36:59.103 ] 00:36:59.103 }' 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:59.103 19:30:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@657 -- # local timeout=460 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.362 19:30:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.620 19:30:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:36:59.620 "name": "raid_bdev1", 00:36:59.620 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:36:59.620 "strip_size_kb": 0, 00:36:59.620 "state": "online", 
00:36:59.620 "raid_level": "raid1", 00:36:59.620 "superblock": false, 00:36:59.620 "num_base_bdevs": 2, 00:36:59.620 "num_base_bdevs_discovered": 2, 00:36:59.620 "num_base_bdevs_operational": 2, 00:36:59.620 "process": { 00:36:59.620 "type": "rebuild", 00:36:59.620 "target": "spare", 00:36:59.620 "progress": { 00:36:59.620 "blocks": 32768, 00:36:59.620 "percent": 50 00:36:59.620 } 00:36:59.620 }, 00:36:59.620 "base_bdevs_list": [ 00:36:59.620 { 00:36:59.620 "name": "spare", 00:36:59.620 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:36:59.620 "is_configured": true, 00:36:59.620 "data_offset": 0, 00:36:59.620 "data_size": 65536 00:36:59.620 }, 00:36:59.620 { 00:36:59.620 "name": "BaseBdev2", 00:36:59.620 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:36:59.620 "is_configured": true, 00:36:59.620 "data_offset": 0, 00:36:59.620 "data_size": 65536 00:36:59.620 } 00:36:59.620 ] 00:36:59.620 }' 00:36:59.620 19:30:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:36:59.620 19:30:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:59.620 19:30:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:36:59.620 19:30:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:36:59.620 19:30:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.600 19:30:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.858 19:30:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:00.858 "name": "raid_bdev1", 00:37:00.858 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:37:00.858 "strip_size_kb": 0, 00:37:00.858 "state": "online", 00:37:00.858 "raid_level": "raid1", 00:37:00.858 "superblock": false, 00:37:00.858 "num_base_bdevs": 2, 00:37:00.858 "num_base_bdevs_discovered": 2, 00:37:00.858 "num_base_bdevs_operational": 2, 00:37:00.858 "process": { 00:37:00.858 "type": "rebuild", 00:37:00.858 "target": "spare", 00:37:00.858 "progress": { 00:37:00.858 "blocks": 59392, 00:37:00.858 "percent": 90 00:37:00.858 } 00:37:00.858 }, 00:37:00.858 "base_bdevs_list": [ 00:37:00.858 { 00:37:00.858 "name": "spare", 00:37:00.858 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:37:00.858 "is_configured": true, 00:37:00.858 "data_offset": 0, 00:37:00.858 "data_size": 65536 00:37:00.858 }, 00:37:00.858 { 00:37:00.858 "name": "BaseBdev2", 00:37:00.858 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:37:00.858 "is_configured": true, 00:37:00.858 "data_offset": 0, 00:37:00.858 "data_size": 65536 00:37:00.858 } 00:37:00.858 ] 00:37:00.858 }' 00:37:00.858 19:30:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:00.858 19:30:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:00.858 19:30:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:00.858 19:30:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:00.858 19:30:16 -- bdev/bdev_raid.sh@662 -- # 
sleep 1 00:37:01.116 [2024-04-18 19:30:16.847192] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:01.116 [2024-04-18 19:30:16.847523] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:01.116 [2024-04-18 19:30:16.847903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.051 19:30:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.311 19:30:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:02.311 "name": "raid_bdev1", 00:37:02.311 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:37:02.311 "strip_size_kb": 0, 00:37:02.311 "state": "online", 00:37:02.311 "raid_level": "raid1", 00:37:02.311 "superblock": false, 00:37:02.311 "num_base_bdevs": 2, 00:37:02.311 "num_base_bdevs_discovered": 2, 00:37:02.311 "num_base_bdevs_operational": 2, 00:37:02.311 "base_bdevs_list": [ 00:37:02.311 { 00:37:02.311 "name": "spare", 00:37:02.311 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:37:02.311 "is_configured": true, 00:37:02.311 "data_offset": 0, 00:37:02.311 "data_size": 65536 00:37:02.311 }, 00:37:02.311 { 00:37:02.311 "name": "BaseBdev2", 00:37:02.311 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:37:02.311 "is_configured": true, 00:37:02.311 "data_offset": 0, 00:37:02.311 "data_size": 65536 00:37:02.311 } 00:37:02.311 ] 00:37:02.311 }' 00:37:02.311 19:30:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@660 -- # break 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.311 19:30:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:02.579 "name": "raid_bdev1", 00:37:02.579 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:37:02.579 "strip_size_kb": 0, 00:37:02.579 "state": "online", 00:37:02.579 "raid_level": "raid1", 00:37:02.579 "superblock": false, 00:37:02.579 "num_base_bdevs": 2, 00:37:02.579 "num_base_bdevs_discovered": 2, 00:37:02.579 "num_base_bdevs_operational": 2, 00:37:02.579 "base_bdevs_list": [ 
00:37:02.579 { 00:37:02.579 "name": "spare", 00:37:02.579 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:37:02.579 "is_configured": true, 00:37:02.579 "data_offset": 0, 00:37:02.579 "data_size": 65536 00:37:02.579 }, 00:37:02.579 { 00:37:02.579 "name": "BaseBdev2", 00:37:02.579 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:37:02.579 "is_configured": true, 00:37:02.579 "data_offset": 0, 00:37:02.579 "data_size": 65536 00:37:02.579 } 00:37:02.579 ] 00:37:02.579 }' 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.579 19:30:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.837 19:30:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:02.837 "name": "raid_bdev1", 00:37:02.837 "uuid": "e90081db-4781-402c-a0ea-312db32556da", 00:37:02.837 "strip_size_kb": 0, 00:37:02.837 "state": "online", 00:37:02.837 "raid_level": "raid1", 00:37:02.837 "superblock": false, 00:37:02.837 "num_base_bdevs": 2, 00:37:02.837 "num_base_bdevs_discovered": 2, 00:37:02.837 "num_base_bdevs_operational": 2, 00:37:02.837 "base_bdevs_list": [ 00:37:02.837 { 00:37:02.837 "name": "spare", 00:37:02.837 "uuid": "169a605f-3eb8-5eac-969b-90bf5d676b52", 00:37:02.837 "is_configured": true, 00:37:02.837 "data_offset": 0, 00:37:02.837 "data_size": 65536 00:37:02.837 }, 00:37:02.837 { 00:37:02.837 "name": "BaseBdev2", 00:37:02.837 "uuid": "0c08313a-3abd-4ddd-ae6e-08f431f96f35", 00:37:02.837 "is_configured": true, 00:37:02.837 "data_offset": 0, 00:37:02.837 "data_size": 65536 00:37:02.837 } 00:37:02.837 ] 00:37:02.837 }' 00:37:02.837 19:30:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:02.837 19:30:18 -- common/autotest_common.sh@10 -- # set +x 00:37:03.773 19:30:19 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:03.773 [2024-04-18 19:30:19.675018] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:03.773 [2024-04-18 19:30:19.675274] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:03.773 [2024-04-18 19:30:19.675555] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:03.773 [2024-04-18 19:30:19.675715] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:37:03.773 [2024-04-18 19:30:19.675800] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:37:03.773 19:30:19 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.773 19:30:19 -- bdev/bdev_raid.sh@671 -- # jq length 00:37:04.032 19:30:19 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:37:04.032 19:30:19 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:37:04.032 19:30:19 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@12 -- # local i 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:04.032 19:30:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:04.290 /dev/nbd0 00:37:04.290 19:30:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:04.549 19:30:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:04.549 19:30:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:04.549 19:30:20 -- common/autotest_common.sh@855 -- # local i 00:37:04.549 19:30:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:04.549 19:30:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:04.549 19:30:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:04.549 19:30:20 -- common/autotest_common.sh@859 -- # break 00:37:04.549 19:30:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:04.549 19:30:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:04.549 19:30:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:04.549 1+0 records in 00:37:04.549 1+0 records out 00:37:04.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289938 s, 14.1 MB/s 00:37:04.549 19:30:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:04.549 19:30:20 -- common/autotest_common.sh@872 -- # size=4096 00:37:04.549 19:30:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:04.549 19:30:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:04.549 19:30:20 -- common/autotest_common.sh@875 -- # return 0 00:37:04.549 19:30:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:04.549 19:30:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:04.549 19:30:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:04.808 /dev/nbd1 00:37:04.808 19:30:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:04.808 19:30:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:04.808 19:30:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:37:04.808 19:30:20 -- common/autotest_common.sh@855 -- # local i 00:37:04.808 19:30:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:04.808 19:30:20 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:04.808 19:30:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:37:04.808 19:30:20 -- common/autotest_common.sh@859 -- # break 00:37:04.808 19:30:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:04.808 19:30:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:04.808 19:30:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:04.808 1+0 records in 00:37:04.808 1+0 records out 00:37:04.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755004 s, 5.4 MB/s 00:37:04.808 19:30:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:04.808 19:30:20 -- common/autotest_common.sh@872 -- # size=4096 00:37:04.808 19:30:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:04.808 19:30:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:04.808 19:30:20 -- common/autotest_common.sh@875 -- # return 0 00:37:04.808 19:30:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:04.808 19:30:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:04.808 19:30:20 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:05.067 19:30:20 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@51 -- # local i 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:05.067 19:30:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@41 -- # break 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@45 -- # return 0 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:05.327 19:30:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@41 -- # break 00:37:05.585 19:30:21 -- bdev/nbd_common.sh@45 -- # return 0 00:37:05.585 19:30:21 -- 
bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:37:05.585 19:30:21 -- bdev/bdev_raid.sh@709 -- # killprocess 132576 00:37:05.585 19:30:21 -- common/autotest_common.sh@936 -- # '[' -z 132576 ']' 00:37:05.585 19:30:21 -- common/autotest_common.sh@940 -- # kill -0 132576 00:37:05.585 19:30:21 -- common/autotest_common.sh@941 -- # uname 00:37:05.585 19:30:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:05.585 19:30:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132576 00:37:05.585 19:30:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:05.585 19:30:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:05.585 19:30:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132576' 00:37:05.585 killing process with pid 132576 00:37:05.585 19:30:21 -- common/autotest_common.sh@955 -- # kill 132576 00:37:05.585 Received shutdown signal, test time was about 60.000000 seconds 00:37:05.585 00:37:05.585 Latency(us) 00:37:05.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.585 =================================================================================================================== 00:37:05.585 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:05.585 [2024-04-18 19:30:21.420044] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:05.585 19:30:21 -- common/autotest_common.sh@960 -- # wait 132576 00:37:05.844 [2024-04-18 19:30:21.765351] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:07.745 ************************************ 00:37:07.745 END TEST raid_rebuild_test 00:37:07.745 ************************************ 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@711 -- # return 0 00:37:07.745 00:37:07.745 real 0m23.663s 00:37:07.745 user 0m32.546s 00:37:07.745 sys 0m4.422s 00:37:07.745 19:30:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:07.745 19:30:23 -- common/autotest_common.sh@10 -- # set +x 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:37:07.745 19:30:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:37:07.745 19:30:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:07.745 19:30:23 -- common/autotest_common.sh@10 -- # set +x 00:37:07.745 ************************************ 00:37:07.745 START TEST raid_rebuild_test_sb 00:37:07.745 ************************************ 00:37:07.745 19:30:23 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true false 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:07.745 19:30:23 -- 
bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@544 -- # raid_pid=133173 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:07.745 19:30:23 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133173 /var/tmp/spdk-raid.sock 00:37:07.745 19:30:23 -- common/autotest_common.sh@817 -- # '[' -z 133173 ']' 00:37:07.745 19:30:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:07.745 19:30:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:07.745 19:30:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:07.745 19:30:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:07.745 19:30:23 -- common/autotest_common.sh@10 -- # set +x 00:37:07.745 [2024-04-18 19:30:23.437973] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:37:07.745 [2024-04-18 19:30:23.439999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133173 ] 00:37:07.745 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:07.745 Zero copy mechanism will not be used. 
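The trace above launches bdevperf on a dedicated RPC socket and waits for it to start listening before any bdev_raid RPCs are issued. A minimal sketch of that launch pattern, reusing the paths and flags from the invocation traced above (the rpc_get_methods poll is a stand-in assumption for the in-tree waitforlisten helper):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock

    # Start bdevperf on its own RPC socket; flags copied from the invocation above.
    "$SPDK/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Poll until the RPC server answers before issuing bdev_raid_create and friends.
    until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done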
00:37:07.745 [2024-04-18 19:30:23.614595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.027 [2024-04-18 19:30:23.878988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.333 [2024-04-18 19:30:24.135474] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:08.592 19:30:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:08.592 19:30:24 -- common/autotest_common.sh@850 -- # return 0 00:37:08.592 19:30:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:08.592 19:30:24 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:08.592 19:30:24 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:08.851 BaseBdev1_malloc 00:37:09.110 19:30:24 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:09.369 [2024-04-18 19:30:25.044423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:09.369 [2024-04-18 19:30:25.044681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:09.369 [2024-04-18 19:30:25.044754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:09.369 [2024-04-18 19:30:25.044988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:09.369 [2024-04-18 19:30:25.047639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:09.369 [2024-04-18 19:30:25.047801] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:09.369 BaseBdev1 00:37:09.369 19:30:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:09.369 19:30:25 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:09.369 19:30:25 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:09.627 BaseBdev2_malloc 00:37:09.627 19:30:25 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:09.887 [2024-04-18 19:30:25.681998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:09.887 [2024-04-18 19:30:25.683980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:09.887 [2024-04-18 19:30:25.684067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:09.887 [2024-04-18 19:30:25.684250] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:09.887 [2024-04-18 19:30:25.686734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:09.887 [2024-04-18 19:30:25.686910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:09.887 BaseBdev2 00:37:09.887 19:30:25 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:37:10.145 spare_malloc 00:37:10.145 19:30:26 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:10.404 spare_delay 00:37:10.404 19:30:26 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:10.663 [2024-04-18 19:30:26.568955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:10.663 [2024-04-18 19:30:26.569253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:10.663 [2024-04-18 19:30:26.569330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:10.663 [2024-04-18 19:30:26.569561] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.663 [2024-04-18 19:30:26.572265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.663 [2024-04-18 19:30:26.572436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:10.663 spare 00:37:10.663 19:30:26 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:11.262 [2024-04-18 19:30:26.849302] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:11.262 [2024-04-18 19:30:26.851754] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:11.262 [2024-04-18 19:30:26.852127] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:37:11.262 [2024-04-18 19:30:26.852242] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:11.262 [2024-04-18 19:30:26.852443] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:37:11.262 [2024-04-18 19:30:26.852881] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:37:11.262 [2024-04-18 19:30:26.852998] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:37:11.262 [2024-04-18 19:30:26.853269] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.262 19:30:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:11.262 19:30:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:11.262 "name": "raid_bdev1", 00:37:11.262 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:11.262 "strip_size_kb": 0, 00:37:11.262 "state": "online", 00:37:11.262 "raid_level": "raid1", 00:37:11.262 "superblock": true, 00:37:11.262 "num_base_bdevs": 2, 00:37:11.263 "num_base_bdevs_discovered": 2, 00:37:11.263 "num_base_bdevs_operational": 2, 00:37:11.263 
"base_bdevs_list": [ 00:37:11.263 { 00:37:11.263 "name": "BaseBdev1", 00:37:11.263 "uuid": "33c388da-f43d-51ab-8d61-aa204858cc49", 00:37:11.263 "is_configured": true, 00:37:11.263 "data_offset": 2048, 00:37:11.263 "data_size": 63488 00:37:11.263 }, 00:37:11.263 { 00:37:11.263 "name": "BaseBdev2", 00:37:11.263 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:11.263 "is_configured": true, 00:37:11.263 "data_offset": 2048, 00:37:11.263 "data_size": 63488 00:37:11.263 } 00:37:11.263 ] 00:37:11.263 }' 00:37:11.263 19:30:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:11.263 19:30:27 -- common/autotest_common.sh@10 -- # set +x 00:37:12.197 19:30:27 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:12.197 19:30:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:37:12.456 [2024-04-18 19:30:28.161760] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:12.456 19:30:28 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:37:12.456 19:30:28 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:12.456 19:30:28 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:12.713 19:30:28 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:37:12.713 19:30:28 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:37:12.713 19:30:28 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:37:12.713 19:30:28 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@12 -- # local i 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:12.713 19:30:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:12.971 [2024-04-18 19:30:28.729647] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:12.971 /dev/nbd0 00:37:12.971 19:30:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:12.971 19:30:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:12.971 19:30:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:12.971 19:30:28 -- common/autotest_common.sh@855 -- # local i 00:37:12.971 19:30:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:12.971 19:30:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:12.971 19:30:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:12.971 19:30:28 -- common/autotest_common.sh@859 -- # break 00:37:12.971 19:30:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:12.971 19:30:28 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:12.971 19:30:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:12.971 1+0 records in 00:37:12.971 1+0 records out 00:37:12.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051532 s, 7.9 MB/s 00:37:12.971 19:30:28 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:12.971 19:30:28 -- common/autotest_common.sh@872 -- # size=4096 00:37:12.971 19:30:28 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:12.971 19:30:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:12.971 19:30:28 -- common/autotest_common.sh@875 -- # return 0 00:37:12.971 19:30:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:12.971 19:30:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:12.971 19:30:28 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:37:12.971 19:30:28 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:37:12.971 19:30:28 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:37:18.270 63488+0 records in 00:37:18.270 63488+0 records out 00:37:18.270 32505856 bytes (33 MB, 31 MiB) copied, 5.14222 s, 6.3 MB/s 00:37:18.270 19:30:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:18.270 19:30:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:18.270 19:30:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:18.270 19:30:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:18.270 19:30:33 -- bdev/nbd_common.sh@51 -- # local i 00:37:18.270 19:30:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:18.270 19:30:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:18.528 [2024-04-18 19:30:34.231574] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@41 -- # break 00:37:18.528 19:30:34 -- bdev/nbd_common.sh@45 -- # return 0 00:37:18.528 19:30:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:18.787 [2024-04-18 19:30:34.471319] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.787 19:30:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.046 19:30:34 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:19.046 "name": "raid_bdev1", 00:37:19.046 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:19.046 "strip_size_kb": 0, 00:37:19.046 "state": "online", 00:37:19.046 "raid_level": "raid1", 00:37:19.046 "superblock": true, 00:37:19.046 "num_base_bdevs": 2, 00:37:19.046 "num_base_bdevs_discovered": 1, 00:37:19.046 "num_base_bdevs_operational": 1, 00:37:19.046 "base_bdevs_list": [ 00:37:19.046 { 00:37:19.046 "name": null, 00:37:19.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.046 "is_configured": false, 00:37:19.046 "data_offset": 2048, 00:37:19.046 "data_size": 63488 00:37:19.046 }, 00:37:19.046 { 00:37:19.046 "name": "BaseBdev2", 00:37:19.046 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:19.046 "is_configured": true, 00:37:19.046 "data_offset": 2048, 00:37:19.046 "data_size": 63488 00:37:19.046 } 00:37:19.046 ] 00:37:19.046 }' 00:37:19.046 19:30:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:19.046 19:30:34 -- common/autotest_common.sh@10 -- # set +x 00:37:19.614 19:30:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:19.873 [2024-04-18 19:30:35.627592] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:37:19.873 [2024-04-18 19:30:35.627886] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:19.873 [2024-04-18 19:30:35.646939] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca42d0 00:37:19.873 [2024-04-18 19:30:35.649429] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:19.873 19:30:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.809 19:30:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.067 19:30:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:21.067 "name": "raid_bdev1", 00:37:21.067 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:21.067 "strip_size_kb": 0, 00:37:21.067 "state": "online", 00:37:21.067 "raid_level": "raid1", 00:37:21.067 "superblock": true, 00:37:21.067 "num_base_bdevs": 2, 00:37:21.067 "num_base_bdevs_discovered": 2, 00:37:21.067 "num_base_bdevs_operational": 2, 00:37:21.067 "process": { 00:37:21.067 "type": "rebuild", 00:37:21.067 "target": "spare", 00:37:21.067 "progress": { 00:37:21.068 "blocks": 24576, 00:37:21.068 "percent": 38 00:37:21.068 } 00:37:21.068 }, 00:37:21.068 "base_bdevs_list": [ 00:37:21.068 { 00:37:21.068 "name": "spare", 00:37:21.068 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:21.068 "is_configured": true, 00:37:21.068 "data_offset": 2048, 00:37:21.068 "data_size": 63488 00:37:21.068 }, 00:37:21.068 { 00:37:21.068 "name": "BaseBdev2", 00:37:21.068 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:21.068 "is_configured": true, 00:37:21.068 "data_offset": 2048, 00:37:21.068 "data_size": 63488 00:37:21.068 } 
00:37:21.068 ] 00:37:21.068 }' 00:37:21.068 19:30:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:21.068 19:30:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:21.068 19:30:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:21.327 19:30:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:21.327 19:30:37 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:21.586 [2024-04-18 19:30:37.287157] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:21.586 [2024-04-18 19:30:37.359694] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:21.586 [2024-04-18 19:30:37.359987] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.586 19:30:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.845 19:30:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:21.845 "name": "raid_bdev1", 00:37:21.845 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:21.845 "strip_size_kb": 0, 00:37:21.845 "state": "online", 00:37:21.845 "raid_level": "raid1", 00:37:21.845 "superblock": true, 00:37:21.845 "num_base_bdevs": 2, 00:37:21.845 "num_base_bdevs_discovered": 1, 00:37:21.845 "num_base_bdevs_operational": 1, 00:37:21.845 "base_bdevs_list": [ 00:37:21.845 { 00:37:21.845 "name": null, 00:37:21.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.845 "is_configured": false, 00:37:21.845 "data_offset": 2048, 00:37:21.845 "data_size": 63488 00:37:21.845 }, 00:37:21.845 { 00:37:21.845 "name": "BaseBdev2", 00:37:21.845 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:21.845 "is_configured": true, 00:37:21.845 "data_offset": 2048, 00:37:21.845 "data_size": 63488 00:37:21.845 } 00:37:21.845 ] 00:37:21.845 }' 00:37:21.845 19:30:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:21.845 19:30:37 -- common/autotest_common.sh@10 -- # set +x 00:37:22.412 19:30:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:22.412 19:30:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:22.412 19:30:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:22.412 19:30:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:22.412 19:30:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:22.413 19:30:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.413 19:30:38 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.671 19:30:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:22.671 "name": "raid_bdev1", 00:37:22.671 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:22.671 "strip_size_kb": 0, 00:37:22.671 "state": "online", 00:37:22.671 "raid_level": "raid1", 00:37:22.672 "superblock": true, 00:37:22.672 "num_base_bdevs": 2, 00:37:22.672 "num_base_bdevs_discovered": 1, 00:37:22.672 "num_base_bdevs_operational": 1, 00:37:22.672 "base_bdevs_list": [ 00:37:22.672 { 00:37:22.672 "name": null, 00:37:22.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.672 "is_configured": false, 00:37:22.672 "data_offset": 2048, 00:37:22.672 "data_size": 63488 00:37:22.672 }, 00:37:22.672 { 00:37:22.672 "name": "BaseBdev2", 00:37:22.672 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:22.672 "is_configured": true, 00:37:22.672 "data_offset": 2048, 00:37:22.672 "data_size": 63488 00:37:22.672 } 00:37:22.672 ] 00:37:22.672 }' 00:37:22.672 19:30:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:22.672 19:30:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:22.672 19:30:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:22.672 19:30:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:22.672 19:30:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:22.997 [2024-04-18 19:30:38.822299] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:37:22.997 [2024-04-18 19:30:38.822540] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:22.997 [2024-04-18 19:30:38.841056] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4470 00:37:22.997 [2024-04-18 19:30:38.843343] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:22.997 19:30:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:37:23.931 19:30:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:23.931 19:30:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:23.931 19:30:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:23.931 19:30:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:23.931 19:30:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:24.189 19:30:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.189 19:30:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.448 19:30:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:24.448 "name": "raid_bdev1", 00:37:24.448 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:24.448 "strip_size_kb": 0, 00:37:24.448 "state": "online", 00:37:24.448 "raid_level": "raid1", 00:37:24.448 "superblock": true, 00:37:24.448 "num_base_bdevs": 2, 00:37:24.448 "num_base_bdevs_discovered": 2, 00:37:24.448 "num_base_bdevs_operational": 2, 00:37:24.448 "process": { 00:37:24.448 "type": "rebuild", 00:37:24.448 "target": "spare", 00:37:24.448 "progress": { 00:37:24.448 "blocks": 24576, 00:37:24.448 "percent": 38 00:37:24.448 } 00:37:24.448 }, 00:37:24.448 "base_bdevs_list": [ 00:37:24.448 { 00:37:24.448 "name": "spare", 00:37:24.448 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:24.448 "is_configured": true, 
00:37:24.448 "data_offset": 2048, 00:37:24.448 "data_size": 63488 00:37:24.448 }, 00:37:24.448 { 00:37:24.448 "name": "BaseBdev2", 00:37:24.448 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:24.448 "is_configured": true, 00:37:24.448 "data_offset": 2048, 00:37:24.449 "data_size": 63488 00:37:24.449 } 00:37:24.449 ] 00:37:24.449 }' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:37:24.449 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@657 -- # local timeout=485 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.449 19:30:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.707 19:30:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:24.707 "name": "raid_bdev1", 00:37:24.707 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:24.707 "strip_size_kb": 0, 00:37:24.707 "state": "online", 00:37:24.707 "raid_level": "raid1", 00:37:24.707 "superblock": true, 00:37:24.707 "num_base_bdevs": 2, 00:37:24.707 "num_base_bdevs_discovered": 2, 00:37:24.707 "num_base_bdevs_operational": 2, 00:37:24.707 "process": { 00:37:24.707 "type": "rebuild", 00:37:24.707 "target": "spare", 00:37:24.707 "progress": { 00:37:24.707 "blocks": 32768, 00:37:24.707 "percent": 51 00:37:24.707 } 00:37:24.707 }, 00:37:24.707 "base_bdevs_list": [ 00:37:24.707 { 00:37:24.707 "name": "spare", 00:37:24.707 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:24.707 "is_configured": true, 00:37:24.707 "data_offset": 2048, 00:37:24.707 "data_size": 63488 00:37:24.707 }, 00:37:24.707 { 00:37:24.707 "name": "BaseBdev2", 00:37:24.707 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:24.707 "is_configured": true, 00:37:24.707 "data_offset": 2048, 00:37:24.707 "data_size": 63488 00:37:24.707 } 00:37:24.707 ] 00:37:24.707 }' 00:37:24.707 19:30:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:24.707 19:30:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:24.708 19:30:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:24.967 19:30:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:24.967 19:30:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < 
timeout )) 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.904 19:30:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.163 [2024-04-18 19:30:41.961360] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:26.163 [2024-04-18 19:30:41.961697] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:26.163 [2024-04-18 19:30:41.961943] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:26.163 19:30:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:26.163 "name": "raid_bdev1", 00:37:26.163 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:26.163 "strip_size_kb": 0, 00:37:26.163 "state": "online", 00:37:26.163 "raid_level": "raid1", 00:37:26.163 "superblock": true, 00:37:26.163 "num_base_bdevs": 2, 00:37:26.163 "num_base_bdevs_discovered": 2, 00:37:26.163 "num_base_bdevs_operational": 2, 00:37:26.163 "base_bdevs_list": [ 00:37:26.163 { 00:37:26.163 "name": "spare", 00:37:26.163 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:26.163 "is_configured": true, 00:37:26.163 "data_offset": 2048, 00:37:26.163 "data_size": 63488 00:37:26.163 }, 00:37:26.163 { 00:37:26.163 "name": "BaseBdev2", 00:37:26.163 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:26.163 "is_configured": true, 00:37:26.163 "data_offset": 2048, 00:37:26.163 "data_size": 63488 00:37:26.163 } 00:37:26.163 ] 00:37:26.163 }' 00:37:26.163 19:30:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:26.163 19:30:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:26.163 19:30:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@660 -- # break 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.421 19:30:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:26.680 "name": "raid_bdev1", 00:37:26.680 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:26.680 "strip_size_kb": 0, 00:37:26.680 "state": "online", 00:37:26.680 "raid_level": "raid1", 00:37:26.680 "superblock": true, 00:37:26.680 "num_base_bdevs": 2, 00:37:26.680 "num_base_bdevs_discovered": 2, 00:37:26.680 "num_base_bdevs_operational": 2, 00:37:26.680 "base_bdevs_list": [ 00:37:26.680 { 00:37:26.680 "name": "spare", 00:37:26.680 "uuid": 
"adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:26.680 "is_configured": true, 00:37:26.680 "data_offset": 2048, 00:37:26.680 "data_size": 63488 00:37:26.680 }, 00:37:26.680 { 00:37:26.680 "name": "BaseBdev2", 00:37:26.680 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:26.680 "is_configured": true, 00:37:26.680 "data_offset": 2048, 00:37:26.680 "data_size": 63488 00:37:26.680 } 00:37:26.680 ] 00:37:26.680 }' 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.680 19:30:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.939 19:30:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:26.939 "name": "raid_bdev1", 00:37:26.939 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:26.939 "strip_size_kb": 0, 00:37:26.939 "state": "online", 00:37:26.939 "raid_level": "raid1", 00:37:26.939 "superblock": true, 00:37:26.939 "num_base_bdevs": 2, 00:37:26.939 "num_base_bdevs_discovered": 2, 00:37:26.939 "num_base_bdevs_operational": 2, 00:37:26.939 "base_bdevs_list": [ 00:37:26.939 { 00:37:26.939 "name": "spare", 00:37:26.939 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:26.939 "is_configured": true, 00:37:26.939 "data_offset": 2048, 00:37:26.939 "data_size": 63488 00:37:26.939 }, 00:37:26.939 { 00:37:26.939 "name": "BaseBdev2", 00:37:26.939 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:26.939 "is_configured": true, 00:37:26.939 "data_offset": 2048, 00:37:26.939 "data_size": 63488 00:37:26.939 } 00:37:26.939 ] 00:37:26.939 }' 00:37:26.939 19:30:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:26.939 19:30:42 -- common/autotest_common.sh@10 -- # set +x 00:37:27.873 19:30:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:27.873 [2024-04-18 19:30:43.746682] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:27.873 [2024-04-18 19:30:43.746901] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:27.873 [2024-04-18 19:30:43.747089] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:27.873 [2024-04-18 19:30:43.747260] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:27.873 [2024-04-18 
19:30:43.747353] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:37:27.873 19:30:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.873 19:30:43 -- bdev/bdev_raid.sh@671 -- # jq length 00:37:28.131 19:30:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:37:28.131 19:30:43 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:37:28.131 19:30:43 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@12 -- # local i 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:28.131 19:30:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:28.389 /dev/nbd0 00:37:28.389 19:30:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:28.389 19:30:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:28.389 19:30:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:28.389 19:30:44 -- common/autotest_common.sh@855 -- # local i 00:37:28.389 19:30:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:28.389 19:30:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:28.389 19:30:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:28.389 19:30:44 -- common/autotest_common.sh@859 -- # break 00:37:28.389 19:30:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:28.389 19:30:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:28.389 19:30:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:28.389 1+0 records in 00:37:28.389 1+0 records out 00:37:28.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508232 s, 8.1 MB/s 00:37:28.389 19:30:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:28.389 19:30:44 -- common/autotest_common.sh@872 -- # size=4096 00:37:28.389 19:30:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:28.389 19:30:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:28.389 19:30:44 -- common/autotest_common.sh@875 -- # return 0 00:37:28.389 19:30:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:28.389 19:30:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:28.389 19:30:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:28.956 /dev/nbd1 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:28.956 19:30:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:37:28.956 19:30:44 -- common/autotest_common.sh@855 -- # local i 00:37:28.956 19:30:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:28.956 19:30:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 
00:37:28.956 19:30:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:37:28.956 19:30:44 -- common/autotest_common.sh@859 -- # break 00:37:28.956 19:30:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:28.956 19:30:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:28.956 19:30:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:28.956 1+0 records in 00:37:28.956 1+0 records out 00:37:28.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472922 s, 8.7 MB/s 00:37:28.956 19:30:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:28.956 19:30:44 -- common/autotest_common.sh@872 -- # size=4096 00:37:28.956 19:30:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:28.956 19:30:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:28.956 19:30:44 -- common/autotest_common.sh@875 -- # return 0 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:28.956 19:30:44 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:28.956 19:30:44 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@51 -- # local i 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:28.956 19:30:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@41 -- # break 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@45 -- # return 0 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:29.215 19:30:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@41 -- # break 00:37:29.782 19:30:45 -- bdev/nbd_common.sh@45 -- # return 0 00:37:29.782 19:30:45 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 
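By this point the superblock variant has rebuilt the mirror onto the spare and verified it by exporting the original BaseBdev1 and the spare over NBD and comparing them past the superblock region. A condensed sketch of that verification step, assuming the same bdev names and RPC socket seen in the trace (the rpc shell function is only local shorthand for the rpc.py calls above):

    # Local shorthand for the rpc.py invocations seen in the trace.
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

    # Expose the original base bdev and the rebuilt spare as NBD block devices.
    rpc nbd_start_disk BaseBdev1 /dev/nbd0
    rpc nbd_start_disk spare /dev/nbd1

    # Compare the data regions; -i 1048576 skips the first 1 MiB on both sides,
    # i.e. the superblock area (data_offset 2048 blocks * 512 B blocklen).
    cmp -i 1048576 /dev/nbd0 /dev/nbd1

    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1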
00:37:29.782 19:30:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:37:29.782 19:30:45 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:37:29.782 19:30:45 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:30.040 19:30:45 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:30.298 [2024-04-18 19:30:46.095598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:30.298 [2024-04-18 19:30:46.095879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:30.298 [2024-04-18 19:30:46.096005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:30.298 [2024-04-18 19:30:46.096107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:30.298 [2024-04-18 19:30:46.098681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:30.298 [2024-04-18 19:30:46.098897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:30.298 [2024-04-18 19:30:46.099116] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:30.298 [2024-04-18 19:30:46.099272] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:30.298 BaseBdev1 00:37:30.298 19:30:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:37:30.298 19:30:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:37:30.298 19:30:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:37:30.557 19:30:46 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:30.817 [2024-04-18 19:30:46.631712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:30.817 [2024-04-18 19:30:46.632001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:30.817 [2024-04-18 19:30:46.632117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:37:30.817 [2024-04-18 19:30:46.632262] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:30.817 [2024-04-18 19:30:46.632754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:30.817 [2024-04-18 19:30:46.632913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:30.817 [2024-04-18 19:30:46.633091] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:37:30.817 [2024-04-18 19:30:46.633216] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:37:30.817 [2024-04-18 19:30:46.633308] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:30.817 [2024-04-18 19:30:46.633350] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:37:30.817 [2024-04-18 19:30:46.633576] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:30.817 BaseBdev2 00:37:30.817 19:30:46 -- bdev/bdev_raid.sh@701 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:31.075 19:30:46 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:31.333 [2024-04-18 19:30:47.027799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:31.333 [2024-04-18 19:30:47.028052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.333 [2024-04-18 19:30:47.028142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:31.333 [2024-04-18 19:30:47.028312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.333 [2024-04-18 19:30:47.028834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.333 [2024-04-18 19:30:47.028983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:31.333 [2024-04-18 19:30:47.029224] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:37:31.333 [2024-04-18 19:30:47.029329] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:31.333 spare 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.333 [2024-04-18 19:30:47.129469] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:37:31.333 [2024-04-18 19:30:47.129644] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:31.333 [2024-04-18 19:30:47.129839] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc4fb0 00:37:31.333 [2024-04-18 19:30:47.130314] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:37:31.333 [2024-04-18 19:30:47.130408] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:37:31.333 [2024-04-18 19:30:47.130605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:31.333 19:30:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:31.333 "name": "raid_bdev1", 00:37:31.333 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:31.333 "strip_size_kb": 0, 00:37:31.333 "state": "online", 00:37:31.333 "raid_level": "raid1", 00:37:31.333 "superblock": true, 00:37:31.333 "num_base_bdevs": 2, 00:37:31.333 "num_base_bdevs_discovered": 2, 00:37:31.333 "num_base_bdevs_operational": 2, 00:37:31.333 "base_bdevs_list": [ 00:37:31.333 { 
00:37:31.333 "name": "spare", 00:37:31.333 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:31.333 "is_configured": true, 00:37:31.333 "data_offset": 2048, 00:37:31.333 "data_size": 63488 00:37:31.333 }, 00:37:31.333 { 00:37:31.333 "name": "BaseBdev2", 00:37:31.333 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:31.333 "is_configured": true, 00:37:31.333 "data_offset": 2048, 00:37:31.333 "data_size": 63488 00:37:31.333 } 00:37:31.333 ] 00:37:31.334 }' 00:37:31.334 19:30:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:31.334 19:30:47 -- common/autotest_common.sh@10 -- # set +x 00:37:32.268 19:30:47 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:32.268 19:30:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:32.268 19:30:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:32.269 19:30:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:32.269 19:30:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:32.269 19:30:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.269 19:30:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.269 19:30:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:32.269 "name": "raid_bdev1", 00:37:32.269 "uuid": "60b1e8d6-1f99-49d0-af4b-a80ff8f8b66c", 00:37:32.269 "strip_size_kb": 0, 00:37:32.269 "state": "online", 00:37:32.269 "raid_level": "raid1", 00:37:32.269 "superblock": true, 00:37:32.269 "num_base_bdevs": 2, 00:37:32.269 "num_base_bdevs_discovered": 2, 00:37:32.269 "num_base_bdevs_operational": 2, 00:37:32.269 "base_bdevs_list": [ 00:37:32.269 { 00:37:32.269 "name": "spare", 00:37:32.269 "uuid": "adac53ba-7952-55df-b389-8ad82d3ddeac", 00:37:32.269 "is_configured": true, 00:37:32.269 "data_offset": 2048, 00:37:32.269 "data_size": 63488 00:37:32.269 }, 00:37:32.269 { 00:37:32.269 "name": "BaseBdev2", 00:37:32.269 "uuid": "ffe0aedc-9529-5c30-82b2-7ee12e21ed3f", 00:37:32.269 "is_configured": true, 00:37:32.269 "data_offset": 2048, 00:37:32.269 "data_size": 63488 00:37:32.269 } 00:37:32.269 ] 00:37:32.269 }' 00:37:32.269 19:30:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:32.527 19:30:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:32.527 19:30:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:32.527 19:30:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:32.527 19:30:48 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.527 19:30:48 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:32.785 19:30:48 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:37:32.785 19:30:48 -- bdev/bdev_raid.sh@709 -- # killprocess 133173 00:37:32.785 19:30:48 -- common/autotest_common.sh@936 -- # '[' -z 133173 ']' 00:37:32.785 19:30:48 -- common/autotest_common.sh@940 -- # kill -0 133173 00:37:32.785 19:30:48 -- common/autotest_common.sh@941 -- # uname 00:37:32.785 19:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:32.785 19:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133173 00:37:32.785 killing process with pid 133173 00:37:32.785 Received shutdown signal, test time was about 60.000000 seconds 00:37:32.785 00:37:32.785 Latency(us) 00:37:32.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.785 
=================================================================================================================== 00:37:32.785 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:32.785 19:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:32.785 19:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:32.785 19:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133173' 00:37:32.785 19:30:48 -- common/autotest_common.sh@955 -- # kill 133173 00:37:32.785 19:30:48 -- common/autotest_common.sh@960 -- # wait 133173 00:37:32.785 [2024-04-18 19:30:48.571274] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:32.785 [2024-04-18 19:30:48.571381] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:32.785 [2024-04-18 19:30:48.571466] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:32.785 [2024-04-18 19:30:48.571478] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:37:33.043 [2024-04-18 19:30:48.922932] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:34.945 ************************************ 00:37:34.945 END TEST raid_rebuild_test_sb 00:37:34.945 ************************************ 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@711 -- # return 0 00:37:34.945 00:37:34.945 real 0m27.069s 00:37:34.945 user 0m39.179s 00:37:34.945 sys 0m5.182s 00:37:34.945 19:30:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:34.945 19:30:50 -- common/autotest_common.sh@10 -- # set +x 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:37:34.945 19:30:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:37:34.945 19:30:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:34.945 19:30:50 -- common/autotest_common.sh@10 -- # set +x 00:37:34.945 ************************************ 00:37:34.945 START TEST raid_rebuild_test_io 00:37:34.945 ************************************ 00:37:34.945 19:30:50 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false true 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:37:34.945 
19:30:50 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@544 -- # raid_pid=133858 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133858 /var/tmp/spdk-raid.sock 00:37:34.945 19:30:50 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:34.945 19:30:50 -- common/autotest_common.sh@817 -- # '[' -z 133858 ']' 00:37:34.945 19:30:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:34.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:34.945 19:30:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:34.945 19:30:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:34.945 19:30:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:34.945 19:30:50 -- common/autotest_common.sh@10 -- # set +x 00:37:34.945 [2024-04-18 19:30:50.609547] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:37:34.945 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:34.945 Zero copy mechanism will not be used. 00:37:34.945 [2024-04-18 19:30:50.609748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133858 ] 00:37:34.945 [2024-04-18 19:30:50.784694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.203 [2024-04-18 19:30:51.043557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.461 [2024-04-18 19:30:51.297147] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:35.719 19:30:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:35.719 19:30:51 -- common/autotest_common.sh@850 -- # return 0 00:37:35.719 19:30:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:35.719 19:30:51 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:37:35.719 19:30:51 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:35.978 BaseBdev1 00:37:35.978 19:30:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:35.978 19:30:51 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:37:35.978 19:30:51 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:36.237 BaseBdev2 00:37:36.237 19:30:52 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:37:36.495 spare_malloc 00:37:36.495 19:30:52 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:36.753 spare_delay 00:37:36.753 19:30:52 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:37.011 [2024-04-18 19:30:52.911321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:37.011 [2024-04-18 19:30:52.911456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:37.011 [2024-04-18 19:30:52.911495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:37.011 [2024-04-18 19:30:52.911550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:37.011 [2024-04-18 19:30:52.914267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:37.011 [2024-04-18 19:30:52.914336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:37.011 spare 00:37:37.011 19:30:52 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:37.269 [2024-04-18 19:30:53.195575] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:37.526 [2024-04-18 19:30:53.197830] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:37.526 [2024-04-18 19:30:53.197960] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:37:37.526 [2024-04-18 19:30:53.197974] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:37.526 [2024-04-18 19:30:53.198166] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:37:37.526 [2024-04-18 19:30:53.198499] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:37:37.526 [2024-04-18 19:30:53.198513] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:37:37.526 [2024-04-18 19:30:53.198693] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:37.526 "name": "raid_bdev1", 00:37:37.526 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:37.526 "strip_size_kb": 0, 00:37:37.526 "state": "online", 00:37:37.526 "raid_level": "raid1", 00:37:37.526 "superblock": false, 00:37:37.526 "num_base_bdevs": 2, 00:37:37.526 "num_base_bdevs_discovered": 2, 00:37:37.526 "num_base_bdevs_operational": 2, 00:37:37.526 
"base_bdevs_list": [ 00:37:37.526 { 00:37:37.526 "name": "BaseBdev1", 00:37:37.526 "uuid": "13101314-a7c2-433c-9c3f-f050c12b8347", 00:37:37.526 "is_configured": true, 00:37:37.526 "data_offset": 0, 00:37:37.526 "data_size": 65536 00:37:37.526 }, 00:37:37.526 { 00:37:37.526 "name": "BaseBdev2", 00:37:37.526 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:37.526 "is_configured": true, 00:37:37.526 "data_offset": 0, 00:37:37.526 "data_size": 65536 00:37:37.526 } 00:37:37.526 ] 00:37:37.526 }' 00:37:37.526 19:30:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:37.526 19:30:53 -- common/autotest_common.sh@10 -- # set +x 00:37:38.460 19:30:54 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:38.460 19:30:54 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:37:38.460 [2024-04-18 19:30:54.312169] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:38.460 19:30:54 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:37:38.460 19:30:54 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.460 19:30:54 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:38.718 19:30:54 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:37:38.718 19:30:54 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:37:38.718 19:30:54 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:38.718 19:30:54 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:37:38.718 [2024-04-18 19:30:54.610331] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:37:38.718 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:38.718 Zero copy mechanism will not be used. 00:37:38.718 Running I/O for 60 seconds... 
00:37:38.976 [2024-04-18 19:30:54.707971] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:38.976 [2024-04-18 19:30:54.708193] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.976 19:30:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:39.234 19:30:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:39.234 "name": "raid_bdev1", 00:37:39.234 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:39.234 "strip_size_kb": 0, 00:37:39.234 "state": "online", 00:37:39.234 "raid_level": "raid1", 00:37:39.234 "superblock": false, 00:37:39.234 "num_base_bdevs": 2, 00:37:39.234 "num_base_bdevs_discovered": 1, 00:37:39.234 "num_base_bdevs_operational": 1, 00:37:39.234 "base_bdevs_list": [ 00:37:39.234 { 00:37:39.234 "name": null, 00:37:39.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.234 "is_configured": false, 00:37:39.234 "data_offset": 0, 00:37:39.234 "data_size": 65536 00:37:39.234 }, 00:37:39.234 { 00:37:39.234 "name": "BaseBdev2", 00:37:39.234 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:39.234 "is_configured": true, 00:37:39.234 "data_offset": 0, 00:37:39.234 "data_size": 65536 00:37:39.234 } 00:37:39.234 ] 00:37:39.234 }' 00:37:39.234 19:30:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:39.234 19:30:54 -- common/autotest_common.sh@10 -- # set +x 00:37:39.801 19:30:55 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:40.060 [2024-04-18 19:30:55.824391] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:37:40.060 [2024-04-18 19:30:55.824456] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:40.060 19:30:55 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:37:40.060 [2024-04-18 19:30:55.880595] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:40.060 [2024-04-18 19:30:55.882667] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:40.060 [2024-04-18 19:30:55.984894] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:40.060 [2024-04-18 19:30:55.985492] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:40.317 [2024-04-18 19:30:56.110853] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
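The verify_raid_bdev_state and verify_raid_bdev_process helpers used here do nothing more than parse the JSON returned by bdev_raid_get_bdevs; a hand-run sketch of the same checks, reusing the jq filters visible in the trace, would be:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.state'                      # expect "online"
    echo "$info" | jq -r '.num_base_bdevs_discovered'  # 1 after BaseBdev1 was removed
    echo "$info" | jq -r '.process.type // "none"'     # "rebuild" while the spare is resyncing
    echo "$info" | jq -r '.process.target // "none"'   # "spare"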
00:37:40.317 [2024-04-18 19:30:56.111194] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:40.576 [2024-04-18 19:30:56.441456] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:40.576 [2024-04-18 19:30:56.442057] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:40.835 [2024-04-18 19:30:56.567514] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:40.835 [2024-04-18 19:30:56.567841] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.113 19:30:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.113 [2024-04-18 19:30:56.992912] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:41.371 19:30:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:41.371 "name": "raid_bdev1", 00:37:41.371 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:41.371 "strip_size_kb": 0, 00:37:41.371 "state": "online", 00:37:41.371 "raid_level": "raid1", 00:37:41.371 "superblock": false, 00:37:41.371 "num_base_bdevs": 2, 00:37:41.371 "num_base_bdevs_discovered": 2, 00:37:41.371 "num_base_bdevs_operational": 2, 00:37:41.371 "process": { 00:37:41.371 "type": "rebuild", 00:37:41.371 "target": "spare", 00:37:41.371 "progress": { 00:37:41.371 "blocks": 16384, 00:37:41.371 "percent": 25 00:37:41.371 } 00:37:41.371 }, 00:37:41.371 "base_bdevs_list": [ 00:37:41.371 { 00:37:41.371 "name": "spare", 00:37:41.371 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:41.371 "is_configured": true, 00:37:41.371 "data_offset": 0, 00:37:41.371 "data_size": 65536 00:37:41.371 }, 00:37:41.371 { 00:37:41.371 "name": "BaseBdev2", 00:37:41.371 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:41.371 "is_configured": true, 00:37:41.371 "data_offset": 0, 00:37:41.371 "data_size": 65536 00:37:41.371 } 00:37:41.371 ] 00:37:41.371 }' 00:37:41.371 19:30:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:41.371 19:30:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:41.371 19:30:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:41.371 19:30:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:41.371 19:30:57 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:41.630 [2024-04-18 19:30:57.443758] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:37:41.630 [2024-04-18 19:30:57.539129] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:41.889 [2024-04-18 
19:30:57.662495] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:41.889 [2024-04-18 19:30:57.672053] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:41.889 [2024-04-18 19:30:57.713015] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.889 19:30:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.147 19:30:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:42.147 "name": "raid_bdev1", 00:37:42.147 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:42.147 "strip_size_kb": 0, 00:37:42.147 "state": "online", 00:37:42.147 "raid_level": "raid1", 00:37:42.147 "superblock": false, 00:37:42.147 "num_base_bdevs": 2, 00:37:42.147 "num_base_bdevs_discovered": 1, 00:37:42.147 "num_base_bdevs_operational": 1, 00:37:42.147 "base_bdevs_list": [ 00:37:42.147 { 00:37:42.147 "name": null, 00:37:42.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.147 "is_configured": false, 00:37:42.147 "data_offset": 0, 00:37:42.147 "data_size": 65536 00:37:42.147 }, 00:37:42.147 { 00:37:42.147 "name": "BaseBdev2", 00:37:42.147 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:42.147 "is_configured": true, 00:37:42.147 "data_offset": 0, 00:37:42.147 "data_size": 65536 00:37:42.147 } 00:37:42.147 ] 00:37:42.147 }' 00:37:42.147 19:30:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:42.147 19:30:58 -- common/autotest_common.sh@10 -- # set +x 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:43.083 19:30:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.342 19:30:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:43.342 "name": "raid_bdev1", 00:37:43.342 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:43.342 "strip_size_kb": 0, 00:37:43.342 "state": "online", 00:37:43.342 "raid_level": "raid1", 00:37:43.342 "superblock": false, 00:37:43.342 "num_base_bdevs": 2, 00:37:43.342 "num_base_bdevs_discovered": 1, 00:37:43.342 "num_base_bdevs_operational": 1, 
00:37:43.342 "base_bdevs_list": [ 00:37:43.342 { 00:37:43.342 "name": null, 00:37:43.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.342 "is_configured": false, 00:37:43.342 "data_offset": 0, 00:37:43.342 "data_size": 65536 00:37:43.342 }, 00:37:43.342 { 00:37:43.342 "name": "BaseBdev2", 00:37:43.342 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:43.342 "is_configured": true, 00:37:43.342 "data_offset": 0, 00:37:43.342 "data_size": 65536 00:37:43.342 } 00:37:43.342 ] 00:37:43.342 }' 00:37:43.342 19:30:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:43.342 19:30:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:43.342 19:30:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:43.342 19:30:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:43.342 19:30:59 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:43.599 [2024-04-18 19:30:59.352741] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:37:43.599 [2024-04-18 19:30:59.352806] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:43.599 19:30:59 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:37:43.599 [2024-04-18 19:30:59.407900] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:43.599 [2024-04-18 19:30:59.410072] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:43.858 [2024-04-18 19:30:59.526144] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:43.858 [2024-04-18 19:30:59.526712] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:43.858 [2024-04-18 19:30:59.753883] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:43.858 [2024-04-18 19:30:59.754247] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:44.424 [2024-04-18 19:31:00.091836] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:44.424 [2024-04-18 19:31:00.222318] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:44.424 [2024-04-18 19:31:00.222638] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.682 19:31:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.682 [2024-04-18 19:31:00.467428] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:37:44.682 [2024-04-18 19:31:00.607672] bdev_raid.c: 
853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:44.682 [2024-04-18 19:31:00.608006] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:44.940 "name": "raid_bdev1", 00:37:44.940 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:44.940 "strip_size_kb": 0, 00:37:44.940 "state": "online", 00:37:44.940 "raid_level": "raid1", 00:37:44.940 "superblock": false, 00:37:44.940 "num_base_bdevs": 2, 00:37:44.940 "num_base_bdevs_discovered": 2, 00:37:44.940 "num_base_bdevs_operational": 2, 00:37:44.940 "process": { 00:37:44.940 "type": "rebuild", 00:37:44.940 "target": "spare", 00:37:44.940 "progress": { 00:37:44.940 "blocks": 16384, 00:37:44.940 "percent": 25 00:37:44.940 } 00:37:44.940 }, 00:37:44.940 "base_bdevs_list": [ 00:37:44.940 { 00:37:44.940 "name": "spare", 00:37:44.940 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:44.940 "is_configured": true, 00:37:44.940 "data_offset": 0, 00:37:44.940 "data_size": 65536 00:37:44.940 }, 00:37:44.940 { 00:37:44.940 "name": "BaseBdev2", 00:37:44.940 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:44.940 "is_configured": true, 00:37:44.940 "data_offset": 0, 00:37:44.940 "data_size": 65536 00:37:44.940 } 00:37:44.940 ] 00:37:44.940 }' 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@657 -- # local timeout=505 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.940 19:31:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.198 19:31:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:45.198 "name": "raid_bdev1", 00:37:45.198 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:45.198 "strip_size_kb": 0, 00:37:45.198 "state": "online", 00:37:45.198 "raid_level": "raid1", 00:37:45.198 "superblock": false, 00:37:45.198 "num_base_bdevs": 2, 00:37:45.198 "num_base_bdevs_discovered": 2, 00:37:45.198 "num_base_bdevs_operational": 2, 00:37:45.198 "process": { 00:37:45.198 "type": "rebuild", 00:37:45.198 "target": "spare", 00:37:45.198 "progress": { 00:37:45.198 "blocks": 22528, 00:37:45.198 "percent": 34 00:37:45.198 } 00:37:45.198 }, 00:37:45.198 "base_bdevs_list": [ 00:37:45.198 { 00:37:45.198 "name": 
"spare", 00:37:45.198 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:45.198 "is_configured": true, 00:37:45.198 "data_offset": 0, 00:37:45.198 "data_size": 65536 00:37:45.198 }, 00:37:45.198 { 00:37:45.198 "name": "BaseBdev2", 00:37:45.198 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:45.198 "is_configured": true, 00:37:45.198 "data_offset": 0, 00:37:45.198 "data_size": 65536 00:37:45.198 } 00:37:45.198 ] 00:37:45.198 }' 00:37:45.198 19:31:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:45.198 19:31:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:45.198 19:31:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:45.457 19:31:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:45.457 19:31:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:45.457 [2024-04-18 19:31:01.258877] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:37:45.715 [2024-04-18 19:31:01.493846] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:37:45.981 [2024-04-18 19:31:01.735264] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:37:45.981 [2024-04-18 19:31:01.735587] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.246 19:31:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:46.505 19:31:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:46.505 "name": "raid_bdev1", 00:37:46.505 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:46.505 "strip_size_kb": 0, 00:37:46.505 "state": "online", 00:37:46.505 "raid_level": "raid1", 00:37:46.505 "superblock": false, 00:37:46.505 "num_base_bdevs": 2, 00:37:46.505 "num_base_bdevs_discovered": 2, 00:37:46.505 "num_base_bdevs_operational": 2, 00:37:46.505 "process": { 00:37:46.505 "type": "rebuild", 00:37:46.505 "target": "spare", 00:37:46.505 "progress": { 00:37:46.505 "blocks": 43008, 00:37:46.505 "percent": 65 00:37:46.505 } 00:37:46.505 }, 00:37:46.505 "base_bdevs_list": [ 00:37:46.505 { 00:37:46.505 "name": "spare", 00:37:46.505 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:46.505 "is_configured": true, 00:37:46.505 "data_offset": 0, 00:37:46.505 "data_size": 65536 00:37:46.505 }, 00:37:46.505 { 00:37:46.505 "name": "BaseBdev2", 00:37:46.505 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:46.505 "is_configured": true, 00:37:46.505 "data_offset": 0, 00:37:46.505 "data_size": 65536 00:37:46.505 } 00:37:46.505 ] 00:37:46.505 }' 00:37:46.505 19:31:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:46.505 19:31:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:46.505 19:31:02 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:46.772 19:31:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:37:46.772 19:31:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:46.772 [2024-04-18 19:31:02.531251] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:37:47.341 [2024-04-18 19:31:02.989103] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:37:47.598 [2024-04-18 19:31:03.332577] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:37:47.598 19:31:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:37:47.598 19:31:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:47.598 19:31:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:47.598 19:31:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:37:47.598 19:31:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:37:47.599 19:31:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:47.599 19:31:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:47.599 19:31:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.857 [2024-04-18 19:31:03.665791] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:47.857 [2024-04-18 19:31:03.765786] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:47.857 [2024-04-18 19:31:03.768227] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:48.115 "name": "raid_bdev1", 00:37:48.115 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:48.115 "strip_size_kb": 0, 00:37:48.115 "state": "online", 00:37:48.115 "raid_level": "raid1", 00:37:48.115 "superblock": false, 00:37:48.115 "num_base_bdevs": 2, 00:37:48.115 "num_base_bdevs_discovered": 2, 00:37:48.115 "num_base_bdevs_operational": 2, 00:37:48.115 "base_bdevs_list": [ 00:37:48.115 { 00:37:48.115 "name": "spare", 00:37:48.115 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:48.115 "is_configured": true, 00:37:48.115 "data_offset": 0, 00:37:48.115 "data_size": 65536 00:37:48.115 }, 00:37:48.115 { 00:37:48.115 "name": "BaseBdev2", 00:37:48.115 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:48.115 "is_configured": true, 00:37:48.115 "data_offset": 0, 00:37:48.115 "data_size": 65536 00:37:48.115 } 00:37:48.115 ] 00:37:48.115 }' 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@660 -- # break 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:37:48.115 19:31:03 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.115 19:31:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:37:48.432 "name": "raid_bdev1", 00:37:48.432 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:48.432 "strip_size_kb": 0, 00:37:48.432 "state": "online", 00:37:48.432 "raid_level": "raid1", 00:37:48.432 "superblock": false, 00:37:48.432 "num_base_bdevs": 2, 00:37:48.432 "num_base_bdevs_discovered": 2, 00:37:48.432 "num_base_bdevs_operational": 2, 00:37:48.432 "base_bdevs_list": [ 00:37:48.432 { 00:37:48.432 "name": "spare", 00:37:48.432 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:48.432 "is_configured": true, 00:37:48.432 "data_offset": 0, 00:37:48.432 "data_size": 65536 00:37:48.432 }, 00:37:48.432 { 00:37:48.432 "name": "BaseBdev2", 00:37:48.432 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:48.432 "is_configured": true, 00:37:48.432 "data_offset": 0, 00:37:48.432 "data_size": 65536 00:37:48.432 } 00:37:48.432 ] 00:37:48.432 }' 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.432 19:31:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.691 19:31:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:48.691 "name": "raid_bdev1", 00:37:48.691 "uuid": "d03fa7fe-e98e-422f-92d9-0ecfb06beba1", 00:37:48.691 "strip_size_kb": 0, 00:37:48.691 "state": "online", 00:37:48.691 "raid_level": "raid1", 00:37:48.691 "superblock": false, 00:37:48.691 "num_base_bdevs": 2, 00:37:48.691 "num_base_bdevs_discovered": 2, 00:37:48.691 "num_base_bdevs_operational": 2, 00:37:48.691 "base_bdevs_list": [ 00:37:48.691 { 00:37:48.691 "name": "spare", 00:37:48.691 "uuid": "490a1de5-2bd9-55d0-a2e0-1c3bf2ee2a70", 00:37:48.691 "is_configured": true, 00:37:48.691 "data_offset": 0, 00:37:48.691 "data_size": 65536 00:37:48.691 }, 00:37:48.691 { 00:37:48.691 "name": "BaseBdev2", 00:37:48.691 "uuid": "f9a797c5-6193-435c-b231-ccd7f51bf4fd", 00:37:48.691 "is_configured": true, 00:37:48.691 "data_offset": 0, 00:37:48.691 "data_size": 65536 00:37:48.691 } 00:37:48.691 ] 00:37:48.691 }' 00:37:48.691 19:31:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:48.691 19:31:04 -- 
common/autotest_common.sh@10 -- # set +x 00:37:49.259 19:31:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:49.518 [2024-04-18 19:31:05.291353] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:49.518 [2024-04-18 19:31:05.291403] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:49.518 00:37:49.518 Latency(us) 00:37:49.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.518 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:37:49.518 raid_bdev1 : 10.73 109.78 329.34 0.00 0.00 12425.83 514.93 111848.11 00:37:49.518 =================================================================================================================== 00:37:49.518 Total : 109.78 329.34 0.00 0.00 12425.83 514.93 111848.11 00:37:49.518 0 00:37:49.518 [2024-04-18 19:31:05.369248] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:49.518 [2024-04-18 19:31:05.369309] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:49.518 [2024-04-18 19:31:05.369388] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:49.518 [2024-04-18 19:31:05.369400] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:37:49.518 19:31:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.518 19:31:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:37:49.776 19:31:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:37:49.776 19:31:05 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:37:49.776 19:31:05 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@12 -- # local i 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:49.776 19:31:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:37:50.035 /dev/nbd0 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:50.035 19:31:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:37:50.035 19:31:05 -- common/autotest_common.sh@855 -- # local i 00:37:50.035 19:31:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:50.035 19:31:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:50.035 19:31:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:37:50.035 19:31:05 -- common/autotest_common.sh@859 -- # break 00:37:50.035 19:31:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:50.035 19:31:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:50.035 19:31:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:37:50.035 1+0 records in 00:37:50.035 1+0 records out 00:37:50.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462688 s, 8.9 MB/s 00:37:50.035 19:31:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:50.035 19:31:05 -- common/autotest_common.sh@872 -- # size=4096 00:37:50.035 19:31:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:50.035 19:31:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:50.035 19:31:05 -- common/autotest_common.sh@875 -- # return 0 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:50.035 19:31:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:37:50.035 19:31:05 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:37:50.035 19:31:05 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@12 -- # local i 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:50.035 19:31:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:37:50.293 /dev/nbd1 00:37:50.293 19:31:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:50.293 19:31:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:50.293 19:31:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:37:50.293 19:31:06 -- common/autotest_common.sh@855 -- # local i 00:37:50.293 19:31:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:37:50.293 19:31:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:37:50.293 19:31:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:37:50.551 19:31:06 -- common/autotest_common.sh@859 -- # break 00:37:50.551 19:31:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:37:50.551 19:31:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:37:50.551 19:31:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:50.551 1+0 records in 00:37:50.551 1+0 records out 00:37:50.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452562 s, 9.1 MB/s 00:37:50.551 19:31:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:50.551 19:31:06 -- common/autotest_common.sh@872 -- # size=4096 00:37:50.551 19:31:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:50.551 19:31:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:37:50.551 19:31:06 -- common/autotest_common.sh@875 -- # return 0 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:50.551 19:31:06 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:50.551 19:31:06 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:37:50.551 19:31:06 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@51 -- # local i 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:50.551 19:31:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:50.864 19:31:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:51.124 19:31:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@41 -- # break 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@45 -- # return 0 00:37:51.125 19:31:06 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@51 -- # local i 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:51.125 19:31:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@41 -- # break 00:37:51.387 19:31:07 -- bdev/nbd_common.sh@45 -- # return 0 00:37:51.387 19:31:07 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:37:51.387 19:31:07 -- bdev/bdev_raid.sh@709 -- # killprocess 133858 00:37:51.387 19:31:07 -- common/autotest_common.sh@936 -- # '[' -z 133858 ']' 00:37:51.387 19:31:07 -- common/autotest_common.sh@940 -- # kill -0 133858 00:37:51.387 19:31:07 -- common/autotest_common.sh@941 -- # uname 00:37:51.387 19:31:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:51.387 19:31:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133858 00:37:51.387 19:31:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:51.387 killing process with pid 133858 00:37:51.387 Received shutdown signal, test time was about 12.587199 seconds 00:37:51.387 00:37:51.387 Latency(us) 00:37:51.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.387 =================================================================================================================== 00:37:51.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:51.387 19:31:07 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:51.387 19:31:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133858' 00:37:51.387 19:31:07 -- common/autotest_common.sh@955 -- # kill 133858 00:37:51.387 19:31:07 -- common/autotest_common.sh@960 -- # wait 133858 00:37:51.387 [2024-04-18 19:31:07.199815] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:51.647 [2024-04-18 19:31:07.464834] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:53.553 ************************************ 00:37:53.553 END TEST raid_rebuild_test_io 00:37:53.553 ************************************ 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:37:53.553 00:37:53.553 real 0m18.518s 00:37:53.553 user 0m28.389s 00:37:53.553 sys 0m2.091s 00:37:53.553 19:31:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:53.553 19:31:09 -- common/autotest_common.sh@10 -- # set +x 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:37:53.553 19:31:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:37:53.553 19:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:53.553 19:31:09 -- common/autotest_common.sh@10 -- # set +x 00:37:53.553 ************************************ 00:37:53.553 START TEST raid_rebuild_test_sb_io 00:37:53.553 ************************************ 00:37:53.553 19:31:09 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true true 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=134362 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134362 /var/tmp/spdk-raid.sock 00:37:53.553 19:31:09 -- common/autotest_common.sh@817 -- # '[' -z 134362 ']' 00:37:53.553 19:31:09 -- common/autotest_common.sh@821 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:37:53.553 19:31:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:53.553 19:31:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:53.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:53.553 19:31:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:53.553 19:31:09 -- common/autotest_common.sh@10 -- # set +x 00:37:53.553 19:31:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:53.553 [2024-04-18 19:31:09.231264] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:37:53.554 [2024-04-18 19:31:09.231718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134362 ] 00:37:53.554 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:53.554 Zero copy mechanism will not be used. 00:37:53.554 [2024-04-18 19:31:09.410827] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.812 [2024-04-18 19:31:09.702670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.070 [2024-04-18 19:31:09.964801] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:54.327 19:31:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:54.327 19:31:10 -- common/autotest_common.sh@850 -- # return 0 00:37:54.327 19:31:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:54.327 19:31:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:54.327 19:31:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:54.585 BaseBdev1_malloc 00:37:54.585 19:31:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:54.845 [2024-04-18 19:31:10.623087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:54.845 [2024-04-18 19:31:10.623195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.845 [2024-04-18 19:31:10.623232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:54.845 [2024-04-18 19:31:10.623284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.845 [2024-04-18 19:31:10.625787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.845 [2024-04-18 19:31:10.625835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:54.845 BaseBdev1 00:37:54.845 19:31:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:37:54.845 19:31:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:37:54.845 19:31:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:55.104 BaseBdev2_malloc 00:37:55.104 19:31:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 
00:37:55.365 [2024-04-18 19:31:11.227694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:55.365 [2024-04-18 19:31:11.227985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:55.365 [2024-04-18 19:31:11.228065] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:55.365 [2024-04-18 19:31:11.228239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:55.365 [2024-04-18 19:31:11.230793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:55.365 [2024-04-18 19:31:11.230952] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:55.365 BaseBdev2 00:37:55.365 19:31:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:37:55.627 spare_malloc 00:37:55.627 19:31:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:55.893 spare_delay 00:37:55.893 19:31:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:56.161 [2024-04-18 19:31:11.996952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:56.161 [2024-04-18 19:31:11.997191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:56.161 [2024-04-18 19:31:11.997268] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:56.161 [2024-04-18 19:31:11.997409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:56.161 [2024-04-18 19:31:11.999982] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:56.161 [2024-04-18 19:31:12.000163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:56.161 spare 00:37:56.162 19:31:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:56.431 [2024-04-18 19:31:12.233142] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:56.431 [2024-04-18 19:31:12.235445] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:56.431 [2024-04-18 19:31:12.235756] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:37:56.431 [2024-04-18 19:31:12.235859] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:56.431 [2024-04-18 19:31:12.236023] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:37:56.431 [2024-04-18 19:31:12.236526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:37:56.431 [2024-04-18 19:31:12.236652] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:37:56.431 [2024-04-18 19:31:12.236883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=online 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:56.431 19:31:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.703 19:31:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:56.703 "name": "raid_bdev1", 00:37:56.703 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:37:56.703 "strip_size_kb": 0, 00:37:56.703 "state": "online", 00:37:56.703 "raid_level": "raid1", 00:37:56.703 "superblock": true, 00:37:56.703 "num_base_bdevs": 2, 00:37:56.703 "num_base_bdevs_discovered": 2, 00:37:56.703 "num_base_bdevs_operational": 2, 00:37:56.703 "base_bdevs_list": [ 00:37:56.703 { 00:37:56.703 "name": "BaseBdev1", 00:37:56.703 "uuid": "fa7e3425-0b87-5de5-b96d-400840c7961f", 00:37:56.703 "is_configured": true, 00:37:56.703 "data_offset": 2048, 00:37:56.703 "data_size": 63488 00:37:56.703 }, 00:37:56.703 { 00:37:56.703 "name": "BaseBdev2", 00:37:56.703 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:37:56.703 "is_configured": true, 00:37:56.703 "data_offset": 2048, 00:37:56.703 "data_size": 63488 00:37:56.703 } 00:37:56.703 ] 00:37:56.703 }' 00:37:56.703 19:31:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:56.703 19:31:12 -- common/autotest_common.sh@10 -- # set +x 00:37:57.301 19:31:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:37:57.301 19:31:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:57.563 [2024-04-18 19:31:13.357620] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:57.564 19:31:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:37:57.564 19:31:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.564 19:31:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:57.822 19:31:13 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:37:57.822 19:31:13 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:37:57.822 19:31:13 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:37:57.822 19:31:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:58.098 [2024-04-18 19:31:13.761841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:37:58.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:58.098 Zero copy mechanism will not be used. 00:37:58.098 Running I/O for 60 seconds... 
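With the array assembled and verified (online, raid1, data_offset 2048), the test starts the background random read/write job through bdevperf's perform_tests hook and then pulls BaseBdev1 out of the array while that I/O is running. A rough sketch of the same sequence, using only commands visible in the trace:

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
PERF="/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock"

# data offset of the first base bdev (2048 blocks for a superblock array)
$RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'

# start the randrw workload defined on the bdevperf command line
# (backgrounded here so the array can be degraded while I/O is in flight)
$PERF perform_tests &

# remove one mirror leg; raid1 should stay online with one operational bdev
$RPC bdev_raid_remove_base_bdev BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```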
00:37:58.098 [2024-04-18 19:31:13.843498] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:58.098 [2024-04-18 19:31:13.855940] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.098 19:31:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:58.357 19:31:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:58.357 "name": "raid_bdev1", 00:37:58.357 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:37:58.357 "strip_size_kb": 0, 00:37:58.357 "state": "online", 00:37:58.357 "raid_level": "raid1", 00:37:58.357 "superblock": true, 00:37:58.357 "num_base_bdevs": 2, 00:37:58.357 "num_base_bdevs_discovered": 1, 00:37:58.357 "num_base_bdevs_operational": 1, 00:37:58.357 "base_bdevs_list": [ 00:37:58.357 { 00:37:58.357 "name": null, 00:37:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:58.357 "is_configured": false, 00:37:58.357 "data_offset": 2048, 00:37:58.357 "data_size": 63488 00:37:58.357 }, 00:37:58.357 { 00:37:58.357 "name": "BaseBdev2", 00:37:58.357 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:37:58.357 "is_configured": true, 00:37:58.357 "data_offset": 2048, 00:37:58.357 "data_size": 63488 00:37:58.357 } 00:37:58.357 ] 00:37:58.357 }' 00:37:58.357 19:31:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:58.357 19:31:14 -- common/autotest_common.sh@10 -- # set +x 00:37:59.293 19:31:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:59.293 [2024-04-18 19:31:15.122568] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:37:59.293 [2024-04-18 19:31:15.122853] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:59.293 19:31:15 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:37:59.293 [2024-04-18 19:31:15.188663] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:59.293 [2024-04-18 19:31:15.191007] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:59.551 [2024-04-18 19:31:15.312315] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:59.551 [2024-04-18 19:31:15.313101] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:59.809 [2024-04-18 19:31:15.547812] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:38:00.068 [2024-04-18 19:31:15.795301] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:00.068 [2024-04-18 19:31:15.905238] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.326 19:31:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.584 [2024-04-18 19:31:16.267639] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:00.584 19:31:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:00.584 "name": "raid_bdev1", 00:38:00.584 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:00.584 "strip_size_kb": 0, 00:38:00.584 "state": "online", 00:38:00.584 "raid_level": "raid1", 00:38:00.584 "superblock": true, 00:38:00.584 "num_base_bdevs": 2, 00:38:00.584 "num_base_bdevs_discovered": 2, 00:38:00.584 "num_base_bdevs_operational": 2, 00:38:00.584 "process": { 00:38:00.584 "type": "rebuild", 00:38:00.584 "target": "spare", 00:38:00.584 "progress": { 00:38:00.584 "blocks": 14336, 00:38:00.584 "percent": 22 00:38:00.584 } 00:38:00.584 }, 00:38:00.584 "base_bdevs_list": [ 00:38:00.584 { 00:38:00.584 "name": "spare", 00:38:00.584 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:00.584 "is_configured": true, 00:38:00.584 "data_offset": 2048, 00:38:00.584 "data_size": 63488 00:38:00.584 }, 00:38:00.584 { 00:38:00.584 "name": "BaseBdev2", 00:38:00.584 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:00.584 "is_configured": true, 00:38:00.584 "data_offset": 2048, 00:38:00.584 "data_size": 63488 00:38:00.584 } 00:38:00.584 ] 00:38:00.584 }' 00:38:00.584 19:31:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:00.584 19:31:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:00.584 19:31:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:00.842 [2024-04-18 19:31:16.512329] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:38:00.842 19:31:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:00.842 19:31:16 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:01.100 [2024-04-18 19:31:16.769680] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:01.100 [2024-04-18 19:31:16.946368] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:01.100 [2024-04-18 19:31:16.955659] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:01.100 [2024-04-18 19:31:16.999578] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
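verify_raid_bdev_state is the helper the test calls after every state change; its argument-by-argument trace follows below. Roughly, it fetches the array's JSON with bdev_raid_get_bdevs and compares the fields against the expected values, here online, raid1, strip size 0 and a single operational base bdev. A hypothetical condensation of that check (the real helper lives in test/bdev/bdev_raid.sh and does more validation):

```bash
# Hypothetical condensation of the traced helper, not the script's exact code.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_raid_bdev_state() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_operational=$5
    local info
    info=$($RPC bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
    [[ $(jq -r '.raid_level' <<<"$info") == "$raid_level" ]] &&
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") -eq "$num_operational" ]]
}

verify_raid_bdev_state raid_bdev1 online raid1 0 1
```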
00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:01.358 "name": "raid_bdev1", 00:38:01.358 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:01.358 "strip_size_kb": 0, 00:38:01.358 "state": "online", 00:38:01.358 "raid_level": "raid1", 00:38:01.358 "superblock": true, 00:38:01.358 "num_base_bdevs": 2, 00:38:01.358 "num_base_bdevs_discovered": 1, 00:38:01.358 "num_base_bdevs_operational": 1, 00:38:01.358 "base_bdevs_list": [ 00:38:01.358 { 00:38:01.358 "name": null, 00:38:01.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.358 "is_configured": false, 00:38:01.358 "data_offset": 2048, 00:38:01.358 "data_size": 63488 00:38:01.358 }, 00:38:01.358 { 00:38:01.358 "name": "BaseBdev2", 00:38:01.358 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:01.358 "is_configured": true, 00:38:01.358 "data_offset": 2048, 00:38:01.358 "data_size": 63488 00:38:01.358 } 00:38:01.358 ] 00:38:01.358 }' 00:38:01.358 19:31:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:01.358 19:31:17 -- common/autotest_common.sh@10 -- # set +x 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:02.293 19:31:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.552 19:31:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:02.552 "name": "raid_bdev1", 00:38:02.552 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:02.552 "strip_size_kb": 0, 00:38:02.552 "state": "online", 00:38:02.552 "raid_level": "raid1", 00:38:02.552 "superblock": true, 00:38:02.552 "num_base_bdevs": 2, 00:38:02.552 "num_base_bdevs_discovered": 1, 00:38:02.552 "num_base_bdevs_operational": 1, 00:38:02.552 "base_bdevs_list": [ 00:38:02.552 { 00:38:02.552 "name": null, 00:38:02.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:02.552 "is_configured": false, 00:38:02.552 "data_offset": 2048, 00:38:02.552 "data_size": 63488 00:38:02.552 }, 00:38:02.552 { 00:38:02.552 "name": "BaseBdev2", 00:38:02.552 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:02.552 "is_configured": true, 00:38:02.552 "data_offset": 2048, 00:38:02.552 "data_size": 63488 00:38:02.552 
} 00:38:02.552 ] 00:38:02.552 }' 00:38:02.552 19:31:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:02.552 19:31:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:02.552 19:31:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:02.552 19:31:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:38:02.552 19:31:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:02.811 [2024-04-18 19:31:18.641475] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:38:02.811 [2024-04-18 19:31:18.641733] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:02.811 19:31:18 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:38:02.811 [2024-04-18 19:31:18.709116] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:02.811 [2024-04-18 19:31:18.711280] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:03.069 [2024-04-18 19:31:18.826499] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:03.069 [2024-04-18 19:31:18.827295] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:03.327 [2024-04-18 19:31:19.049970] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:03.327 [2024-04-18 19:31:19.050442] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:03.585 [2024-04-18 19:31:19.401550] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:03.844 [2024-04-18 19:31:19.619955] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:03.844 [2024-04-18 19:31:19.620493] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:03.844 19:31:19 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:03.845 19:31:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:03.845 19:31:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:03.845 19:31:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:03.845 19:31:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:03.845 19:31:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.845 19:31:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.103 [2024-04-18 19:31:19.955158] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:04.362 "name": "raid_bdev1", 00:38:04.362 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:04.362 "strip_size_kb": 0, 00:38:04.362 "state": "online", 00:38:04.362 "raid_level": "raid1", 00:38:04.362 "superblock": true, 00:38:04.362 "num_base_bdevs": 2, 00:38:04.362 "num_base_bdevs_discovered": 2, 00:38:04.362 "num_base_bdevs_operational": 2, 00:38:04.362 "process": { 00:38:04.362 "type": "rebuild", 00:38:04.362 "target": "spare", 00:38:04.362 
"progress": { 00:38:04.362 "blocks": 14336, 00:38:04.362 "percent": 22 00:38:04.362 } 00:38:04.362 }, 00:38:04.362 "base_bdevs_list": [ 00:38:04.362 { 00:38:04.362 "name": "spare", 00:38:04.362 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:04.362 "is_configured": true, 00:38:04.362 "data_offset": 2048, 00:38:04.362 "data_size": 63488 00:38:04.362 }, 00:38:04.362 { 00:38:04.362 "name": "BaseBdev2", 00:38:04.362 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:04.362 "is_configured": true, 00:38:04.362 "data_offset": 2048, 00:38:04.362 "data_size": 63488 00:38:04.362 } 00:38:04.362 ] 00:38:04.362 }' 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:04.362 [2024-04-18 19:31:20.094143] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:38:04.362 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@657 -- # local timeout=525 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.362 19:31:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.621 19:31:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:04.621 "name": "raid_bdev1", 00:38:04.621 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:04.621 "strip_size_kb": 0, 00:38:04.621 "state": "online", 00:38:04.621 "raid_level": "raid1", 00:38:04.621 "superblock": true, 00:38:04.621 "num_base_bdevs": 2, 00:38:04.621 "num_base_bdevs_discovered": 2, 00:38:04.621 "num_base_bdevs_operational": 2, 00:38:04.621 "process": { 00:38:04.621 "type": "rebuild", 00:38:04.621 "target": "spare", 00:38:04.621 "progress": { 00:38:04.621 "blocks": 18432, 00:38:04.621 "percent": 29 00:38:04.621 } 00:38:04.621 }, 00:38:04.621 "base_bdevs_list": [ 00:38:04.621 { 00:38:04.621 "name": "spare", 00:38:04.621 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:04.621 "is_configured": true, 00:38:04.621 "data_offset": 2048, 00:38:04.621 "data_size": 63488 00:38:04.621 }, 00:38:04.621 { 00:38:04.621 "name": "BaseBdev2", 00:38:04.621 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:04.622 "is_configured": true, 00:38:04.622 "data_offset": 2048, 00:38:04.622 "data_size": 63488 00:38:04.622 } 00:38:04.622 ] 00:38:04.622 }' 00:38:04.622 19:31:20 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:04.622 19:31:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:04.622 19:31:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:04.622 19:31:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:04.622 19:31:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:04.881 [2024-04-18 19:31:20.575746] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:38:05.141 [2024-04-18 19:31:20.913321] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:38:05.141 [2024-04-18 19:31:20.914091] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:38:05.141 [2024-04-18 19:31:21.038694] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.710 19:31:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:05.710 [2024-04-18 19:31:21.578890] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:38:05.970 [2024-04-18 19:31:21.703164] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:38:05.970 19:31:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:05.970 "name": "raid_bdev1", 00:38:05.970 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:05.970 "strip_size_kb": 0, 00:38:05.970 "state": "online", 00:38:05.970 "raid_level": "raid1", 00:38:05.970 "superblock": true, 00:38:05.970 "num_base_bdevs": 2, 00:38:05.970 "num_base_bdevs_discovered": 2, 00:38:05.970 "num_base_bdevs_operational": 2, 00:38:05.970 "process": { 00:38:05.970 "type": "rebuild", 00:38:05.970 "target": "spare", 00:38:05.970 "progress": { 00:38:05.970 "blocks": 40960, 00:38:05.970 "percent": 64 00:38:05.970 } 00:38:05.970 }, 00:38:05.970 "base_bdevs_list": [ 00:38:05.970 { 00:38:05.970 "name": "spare", 00:38:05.970 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:05.970 "is_configured": true, 00:38:05.970 "data_offset": 2048, 00:38:05.970 "data_size": 63488 00:38:05.970 }, 00:38:05.970 { 00:38:05.970 "name": "BaseBdev2", 00:38:05.970 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:05.970 "is_configured": true, 00:38:05.970 "data_offset": 2048, 00:38:05.970 "data_size": 63488 00:38:05.970 } 00:38:05.970 ] 00:38:05.970 }' 00:38:05.970 19:31:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:05.970 19:31:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:05.970 19:31:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:05.970 19:31:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:38:05.970 19:31:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:06.229 [2024-04-18 19:31:22.032400] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.167 19:31:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:07.167 [2024-04-18 19:31:22.911791] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:07.167 [2024-04-18 19:31:23.018203] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:07.167 [2024-04-18 19:31:23.021250] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:07.429 "name": "raid_bdev1", 00:38:07.429 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:07.429 "strip_size_kb": 0, 00:38:07.429 "state": "online", 00:38:07.429 "raid_level": "raid1", 00:38:07.429 "superblock": true, 00:38:07.429 "num_base_bdevs": 2, 00:38:07.429 "num_base_bdevs_discovered": 2, 00:38:07.429 "num_base_bdevs_operational": 2, 00:38:07.429 "base_bdevs_list": [ 00:38:07.429 { 00:38:07.429 "name": "spare", 00:38:07.429 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:07.429 "is_configured": true, 00:38:07.429 "data_offset": 2048, 00:38:07.429 "data_size": 63488 00:38:07.429 }, 00:38:07.429 { 00:38:07.429 "name": "BaseBdev2", 00:38:07.429 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:07.429 "is_configured": true, 00:38:07.429 "data_offset": 2048, 00:38:07.429 "data_size": 63488 00:38:07.429 } 00:38:07.429 ] 00:38:07.429 }' 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@660 -- # break 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.429 19:31:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:07.691 19:31:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:07.691 "name": "raid_bdev1", 00:38:07.691 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:07.691 "strip_size_kb": 0, 00:38:07.691 "state": "online", 
00:38:07.691 "raid_level": "raid1", 00:38:07.691 "superblock": true, 00:38:07.691 "num_base_bdevs": 2, 00:38:07.691 "num_base_bdevs_discovered": 2, 00:38:07.691 "num_base_bdevs_operational": 2, 00:38:07.691 "base_bdevs_list": [ 00:38:07.691 { 00:38:07.691 "name": "spare", 00:38:07.691 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:07.691 "is_configured": true, 00:38:07.691 "data_offset": 2048, 00:38:07.691 "data_size": 63488 00:38:07.691 }, 00:38:07.691 { 00:38:07.691 "name": "BaseBdev2", 00:38:07.691 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:07.691 "is_configured": true, 00:38:07.691 "data_offset": 2048, 00:38:07.691 "data_size": 63488 00:38:07.691 } 00:38:07.691 ] 00:38:07.691 }' 00:38:07.691 19:31:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:07.691 19:31:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:07.691 19:31:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.950 19:31:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:08.208 19:31:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:08.208 "name": "raid_bdev1", 00:38:08.208 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:08.208 "strip_size_kb": 0, 00:38:08.208 "state": "online", 00:38:08.208 "raid_level": "raid1", 00:38:08.208 "superblock": true, 00:38:08.208 "num_base_bdevs": 2, 00:38:08.208 "num_base_bdevs_discovered": 2, 00:38:08.208 "num_base_bdevs_operational": 2, 00:38:08.208 "base_bdevs_list": [ 00:38:08.208 { 00:38:08.208 "name": "spare", 00:38:08.208 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:08.208 "is_configured": true, 00:38:08.208 "data_offset": 2048, 00:38:08.208 "data_size": 63488 00:38:08.208 }, 00:38:08.208 { 00:38:08.208 "name": "BaseBdev2", 00:38:08.208 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:08.208 "is_configured": true, 00:38:08.208 "data_offset": 2048, 00:38:08.208 "data_size": 63488 00:38:08.208 } 00:38:08.208 ] 00:38:08.208 }' 00:38:08.208 19:31:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:08.208 19:31:23 -- common/autotest_common.sh@10 -- # set +x 00:38:09.144 19:31:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:09.144 [2024-04-18 19:31:24.978227] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:09.144 [2024-04-18 19:31:24.978443] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:38:09.144 00:38:09.144 Latency(us) 00:38:09.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.144 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:38:09.144 raid_bdev1 : 11.26 111.48 334.43 0.00 0.00 11997.54 431.06 119337.94 00:38:09.144 =================================================================================================================== 00:38:09.144 Total : 111.48 334.43 0.00 0.00 11997.54 431.06 119337.94 00:38:09.144 [2024-04-18 19:31:25.046204] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:09.144 [2024-04-18 19:31:25.046375] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:09.144 [2024-04-18 19:31:25.046495] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:09.144 [2024-04-18 19:31:25.046573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:38:09.144 0 00:38:09.144 19:31:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:09.144 19:31:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:38:09.714 19:31:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:38:09.714 19:31:25 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:38:09.714 19:31:25 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@12 -- # local i 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:38:09.714 /dev/nbd0 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:09.714 19:31:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:09.714 19:31:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:38:09.714 19:31:25 -- common/autotest_common.sh@855 -- # local i 00:38:09.714 19:31:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:38:09.714 19:31:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:38:09.714 19:31:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:38:09.973 19:31:25 -- common/autotest_common.sh@859 -- # break 00:38:09.973 19:31:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:09.973 19:31:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:09.973 19:31:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:09.973 1+0 records in 00:38:09.973 1+0 records out 00:38:09.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328647 s, 12.5 MB/s 00:38:09.973 19:31:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.973 19:31:25 -- common/autotest_common.sh@872 -- # size=4096 00:38:09.973 19:31:25 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.973 19:31:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:38:09.973 19:31:25 -- common/autotest_common.sh@875 -- # return 0 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.973 19:31:25 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:38:09.973 19:31:25 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:38:09.973 19:31:25 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@12 -- # local i 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:38:09.973 /dev/nbd1 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:09.973 19:31:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:38:09.973 19:31:25 -- common/autotest_common.sh@855 -- # local i 00:38:09.973 19:31:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:38:09.973 19:31:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:38:09.973 19:31:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:38:09.973 19:31:25 -- common/autotest_common.sh@859 -- # break 00:38:09.973 19:31:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:09.973 19:31:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:09.973 19:31:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:09.973 1+0 records in 00:38:09.973 1+0 records out 00:38:09.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386393 s, 10.6 MB/s 00:38:09.973 19:31:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.973 19:31:25 -- common/autotest_common.sh@872 -- # size=4096 00:38:09.973 19:31:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.973 19:31:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:38:09.973 19:31:25 -- common/autotest_common.sh@875 -- # return 0 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:09.973 19:31:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.973 19:31:25 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:10.233 19:31:26 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:38:10.233 19:31:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:10.233 19:31:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:10.233 19:31:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:10.233 19:31:26 -- bdev/nbd_common.sh@51 -- # local i 00:38:10.233 19:31:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:10.233 19:31:26 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:10.492 19:31:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@41 -- # break 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@45 -- # return 0 00:38:10.751 19:31:26 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@51 -- # local i 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:10.751 19:31:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@41 -- # break 00:38:11.009 19:31:26 -- bdev/nbd_common.sh@45 -- # return 0 00:38:11.009 19:31:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:38:11.009 19:31:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:38:11.009 19:31:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:38:11.009 19:31:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:38:11.326 19:31:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:11.585 [2024-04-18 19:31:27.402949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:11.585 [2024-04-18 19:31:27.403234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:11.585 [2024-04-18 19:31:27.403299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:11.585 [2024-04-18 19:31:27.403431] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:11.585 [2024-04-18 19:31:27.405745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:11.585 [2024-04-18 
19:31:27.405925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:11.585 [2024-04-18 19:31:27.406123] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:11.585 [2024-04-18 19:31:27.406266] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:11.585 BaseBdev1 00:38:11.585 19:31:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:38:11.585 19:31:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:38:11.585 19:31:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:38:11.844 19:31:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:12.102 [2024-04-18 19:31:27.971151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:12.102 [2024-04-18 19:31:27.971431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.102 [2024-04-18 19:31:27.971496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:12.102 [2024-04-18 19:31:27.971670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.102 [2024-04-18 19:31:27.972151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.102 [2024-04-18 19:31:27.972309] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:12.102 [2024-04-18 19:31:27.972537] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:38:12.102 [2024-04-18 19:31:27.972628] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:38:12.102 [2024-04-18 19:31:27.972697] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:12.102 [2024-04-18 19:31:27.972778] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:38:12.102 [2024-04-18 19:31:27.972916] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:12.102 BaseBdev2 00:38:12.102 19:31:27 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:12.431 19:31:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:12.698 [2024-04-18 19:31:28.403310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:12.698 [2024-04-18 19:31:28.403583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.698 [2024-04-18 19:31:28.403656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:12.698 [2024-04-18 19:31:28.403768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.698 [2024-04-18 19:31:28.404338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.698 [2024-04-18 19:31:28.404490] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:12.698 [2024-04-18 19:31:28.404693] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:38:12.698 
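The fixture is then reassembled from on-disk metadata: each passthru base bdev (and the delay-backed spare) is deleted and re-created, examine finds the raid superblock on the underlying data, and the bdevs are claimed back into raid_bdev1, which comes back online below. A compressed sketch of that re-registration loop, using the same RPCs as the trace:

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# re-register the base bdevs; on creation, examine spots the raid superblock
# written earlier and re-attaches each one to raid_bdev1
for i in 1 2; do
    $RPC bdev_passthru_delete "BaseBdev${i}"
    $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# the spare sits on a delay bdev (spare_delay) rather than a raw malloc bdev
$RPC bdev_passthru_delete spare
$RPC bdev_passthru_create -b spare_delay -p spare
```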
[2024-04-18 19:31:28.404810] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:12.698 spare 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:12.698 19:31:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.698 [2024-04-18 19:31:28.504944] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:38:12.698 [2024-04-18 19:31:28.505145] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:12.698 [2024-04-18 19:31:28.505341] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d080 00:38:12.698 [2024-04-18 19:31:28.505860] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:38:12.698 [2024-04-18 19:31:28.505973] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:38:12.698 [2024-04-18 19:31:28.506203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:12.957 19:31:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:12.957 "name": "raid_bdev1", 00:38:12.957 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:12.957 "strip_size_kb": 0, 00:38:12.957 "state": "online", 00:38:12.957 "raid_level": "raid1", 00:38:12.957 "superblock": true, 00:38:12.957 "num_base_bdevs": 2, 00:38:12.957 "num_base_bdevs_discovered": 2, 00:38:12.957 "num_base_bdevs_operational": 2, 00:38:12.957 "base_bdevs_list": [ 00:38:12.957 { 00:38:12.957 "name": "spare", 00:38:12.957 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:12.957 "is_configured": true, 00:38:12.957 "data_offset": 2048, 00:38:12.957 "data_size": 63488 00:38:12.957 }, 00:38:12.957 { 00:38:12.957 "name": "BaseBdev2", 00:38:12.957 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:12.957 "is_configured": true, 00:38:12.957 "data_offset": 2048, 00:38:12.957 "data_size": 63488 00:38:12.957 } 00:38:12.957 ] 00:38:12.957 }' 00:38:12.957 19:31:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:12.957 19:31:28 -- common/autotest_common.sh@10 -- # set +x 00:38:13.524 19:31:29 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:13.524 19:31:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:13.524 19:31:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:38:13.524 19:31:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:38:13.524 19:31:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:13.783 19:31:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:38:13.783 19:31:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:14.042 "name": "raid_bdev1", 00:38:14.042 "uuid": "8703ebc9-708c-4d1b-bead-dd86df3c7b5b", 00:38:14.042 "strip_size_kb": 0, 00:38:14.042 "state": "online", 00:38:14.042 "raid_level": "raid1", 00:38:14.042 "superblock": true, 00:38:14.042 "num_base_bdevs": 2, 00:38:14.042 "num_base_bdevs_discovered": 2, 00:38:14.042 "num_base_bdevs_operational": 2, 00:38:14.042 "base_bdevs_list": [ 00:38:14.042 { 00:38:14.042 "name": "spare", 00:38:14.042 "uuid": "d3db9852-e72d-587f-a518-eab034fb7f3d", 00:38:14.042 "is_configured": true, 00:38:14.042 "data_offset": 2048, 00:38:14.042 "data_size": 63488 00:38:14.042 }, 00:38:14.042 { 00:38:14.042 "name": "BaseBdev2", 00:38:14.042 "uuid": "dd87f3ef-3567-515c-91c9-3140a8f76cd7", 00:38:14.042 "is_configured": true, 00:38:14.042 "data_offset": 2048, 00:38:14.042 "data_size": 63488 00:38:14.042 } 00:38:14.042 ] 00:38:14.042 }' 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:14.042 19:31:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:14.299 19:31:30 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:38:14.299 19:31:30 -- bdev/bdev_raid.sh@709 -- # killprocess 134362 00:38:14.299 19:31:30 -- common/autotest_common.sh@936 -- # '[' -z 134362 ']' 00:38:14.299 19:31:30 -- common/autotest_common.sh@940 -- # kill -0 134362 00:38:14.299 19:31:30 -- common/autotest_common.sh@941 -- # uname 00:38:14.299 19:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:14.299 19:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134362 00:38:14.299 killing process with pid 134362 00:38:14.299 Received shutdown signal, test time was about 16.411872 seconds 00:38:14.299 00:38:14.299 Latency(us) 00:38:14.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.299 =================================================================================================================== 00:38:14.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:14.300 19:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:14.300 19:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:14.300 19:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134362' 00:38:14.300 19:31:30 -- common/autotest_common.sh@955 -- # kill 134362 00:38:14.300 19:31:30 -- common/autotest_common.sh@960 -- # wait 134362 00:38:14.300 [2024-04-18 19:31:30.176171] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:14.300 [2024-04-18 19:31:30.176253] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:14.300 [2024-04-18 19:31:30.176317] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:14.300 [2024-04-18 19:31:30.176371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:38:14.557 [2024-04-18 
19:31:30.441009] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:16.459 ************************************ 00:38:16.459 END TEST raid_rebuild_test_sb_io 00:38:16.459 ************************************ 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:38:16.459 00:38:16.459 real 0m22.866s 00:38:16.459 user 0m36.571s 00:38:16.459 sys 0m2.682s 00:38:16.459 19:31:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:16.459 19:31:32 -- common/autotest_common.sh@10 -- # set +x 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:38:16.459 19:31:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:38:16.459 19:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:16.459 19:31:32 -- common/autotest_common.sh@10 -- # set +x 00:38:16.459 ************************************ 00:38:16.459 START TEST raid_rebuild_test 00:38:16.459 ************************************ 00:38:16.459 19:31:32 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false false 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@544 -- # raid_pid=134991 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134991 /var/tmp/spdk-raid.sock 00:38:16.459 19:31:32 -- common/autotest_common.sh@817 -- # '[' -z 134991 ']' 00:38:16.459 19:31:32 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:38:16.459 19:31:32 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:16.459 19:31:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:16.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:16.459 19:31:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:16.459 19:31:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:16.459 19:31:32 -- common/autotest_common.sh@10 -- # set +x 00:38:16.459 [2024-04-18 19:31:32.171053] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:38:16.459 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:16.459 Zero copy mechanism will not be used. 00:38:16.459 [2024-04-18 19:31:32.171255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134991 ] 00:38:16.459 [2024-04-18 19:31:32.341534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.717 [2024-04-18 19:31:32.568206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.977 [2024-04-18 19:31:32.809156] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:17.234 19:31:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:17.234 19:31:33 -- common/autotest_common.sh@850 -- # return 0 00:38:17.234 19:31:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:17.235 19:31:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:38:17.235 19:31:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:38:17.492 BaseBdev1 00:38:17.492 19:31:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:17.492 19:31:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:38:17.492 19:31:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:38:18.059 BaseBdev2 00:38:18.059 19:31:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:18.059 19:31:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:38:18.059 19:31:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:38:18.316 BaseBdev3 00:38:18.316 19:31:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:18.316 19:31:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:38:18.316 19:31:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:38:18.574 BaseBdev4 00:38:18.574 19:31:34 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:38:18.832 spare_malloc 00:38:18.833 19:31:34 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:19.091 spare_delay 00:38:19.091 19:31:34 -- bdev/bdev_raid.sh@560 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:19.348 [2024-04-18 19:31:35.071776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:19.348 [2024-04-18 19:31:35.071876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:19.348 [2024-04-18 19:31:35.071909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:19.348 [2024-04-18 19:31:35.071955] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:19.348 [2024-04-18 19:31:35.074520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:19.348 [2024-04-18 19:31:35.074571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:19.348 spare 00:38:19.348 19:31:35 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:38:19.605 [2024-04-18 19:31:35.291920] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:19.605 [2024-04-18 19:31:35.294132] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:19.605 [2024-04-18 19:31:35.294202] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:19.605 [2024-04-18 19:31:35.294238] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:19.605 [2024-04-18 19:31:35.294320] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:38:19.605 [2024-04-18 19:31:35.294331] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:38:19.605 [2024-04-18 19:31:35.294516] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:38:19.605 [2024-04-18 19:31:35.294862] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:38:19.605 [2024-04-18 19:31:35.294883] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:38:19.606 [2024-04-18 19:31:35.295078] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:19.606 19:31:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.863 19:31:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:19.863 "name": "raid_bdev1", 00:38:19.863 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 
00:38:19.863 "strip_size_kb": 0, 00:38:19.863 "state": "online", 00:38:19.863 "raid_level": "raid1", 00:38:19.863 "superblock": false, 00:38:19.863 "num_base_bdevs": 4, 00:38:19.863 "num_base_bdevs_discovered": 4, 00:38:19.863 "num_base_bdevs_operational": 4, 00:38:19.863 "base_bdevs_list": [ 00:38:19.863 { 00:38:19.863 "name": "BaseBdev1", 00:38:19.863 "uuid": "d9b28012-0826-4a3a-a6e4-369cad640bf1", 00:38:19.863 "is_configured": true, 00:38:19.863 "data_offset": 0, 00:38:19.863 "data_size": 65536 00:38:19.863 }, 00:38:19.863 { 00:38:19.863 "name": "BaseBdev2", 00:38:19.863 "uuid": "a242174d-921c-4e58-a8f7-70ce84b94acf", 00:38:19.863 "is_configured": true, 00:38:19.863 "data_offset": 0, 00:38:19.863 "data_size": 65536 00:38:19.863 }, 00:38:19.863 { 00:38:19.863 "name": "BaseBdev3", 00:38:19.863 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:19.863 "is_configured": true, 00:38:19.863 "data_offset": 0, 00:38:19.863 "data_size": 65536 00:38:19.863 }, 00:38:19.863 { 00:38:19.863 "name": "BaseBdev4", 00:38:19.863 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:19.863 "is_configured": true, 00:38:19.863 "data_offset": 0, 00:38:19.863 "data_size": 65536 00:38:19.863 } 00:38:19.863 ] 00:38:19.863 }' 00:38:19.863 19:31:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:19.863 19:31:35 -- common/autotest_common.sh@10 -- # set +x 00:38:20.430 19:31:36 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:38:20.430 19:31:36 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:20.689 [2024-04-18 19:31:36.532391] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:20.689 19:31:36 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:38:20.689 19:31:36 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:20.689 19:31:36 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:20.948 19:31:36 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:38:20.948 19:31:36 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:38:20.948 19:31:36 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:38:20.948 19:31:36 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@12 -- # local i 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:20.948 19:31:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:21.280 [2024-04-18 19:31:37.060208] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:38:21.280 /dev/nbd0 00:38:21.280 19:31:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:21.280 19:31:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:21.280 19:31:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:38:21.280 19:31:37 -- common/autotest_common.sh@855 -- # local i 00:38:21.280 19:31:37 -- common/autotest_common.sh@857 -- # (( i = 
1 )) 00:38:21.280 19:31:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:38:21.280 19:31:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:38:21.280 19:31:37 -- common/autotest_common.sh@859 -- # break 00:38:21.280 19:31:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:21.280 19:31:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:21.280 19:31:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:21.280 1+0 records in 00:38:21.280 1+0 records out 00:38:21.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237346 s, 17.3 MB/s 00:38:21.280 19:31:37 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:21.280 19:31:37 -- common/autotest_common.sh@872 -- # size=4096 00:38:21.280 19:31:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:21.280 19:31:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:38:21.280 19:31:37 -- common/autotest_common.sh@875 -- # return 0 00:38:21.280 19:31:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:21.280 19:31:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:21.280 19:31:37 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:38:21.280 19:31:37 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:38:21.280 19:31:37 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:38:27.846 65536+0 records in 00:38:27.846 65536+0 records out 00:38:27.846 33554432 bytes (34 MB, 32 MiB) copied, 5.37978 s, 6.2 MB/s 00:38:27.846 19:31:42 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@51 -- # local i 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:27.846 [2024-04-18 19:31:42.801412] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@41 -- # break 00:38:27.846 19:31:42 -- bdev/nbd_common.sh@45 -- # return 0 00:38:27.846 19:31:42 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:27.846 [2024-04-18 19:31:43.073113] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:27.846 19:31:43 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:27.846 "name": "raid_bdev1", 00:38:27.846 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:27.846 "strip_size_kb": 0, 00:38:27.846 "state": "online", 00:38:27.846 "raid_level": "raid1", 00:38:27.846 "superblock": false, 00:38:27.846 "num_base_bdevs": 4, 00:38:27.846 "num_base_bdevs_discovered": 3, 00:38:27.846 "num_base_bdevs_operational": 3, 00:38:27.846 "base_bdevs_list": [ 00:38:27.846 { 00:38:27.846 "name": null, 00:38:27.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:27.846 "is_configured": false, 00:38:27.846 "data_offset": 0, 00:38:27.846 "data_size": 65536 00:38:27.846 }, 00:38:27.846 { 00:38:27.846 "name": "BaseBdev2", 00:38:27.846 "uuid": "a242174d-921c-4e58-a8f7-70ce84b94acf", 00:38:27.846 "is_configured": true, 00:38:27.846 "data_offset": 0, 00:38:27.846 "data_size": 65536 00:38:27.846 }, 00:38:27.846 { 00:38:27.846 "name": "BaseBdev3", 00:38:27.846 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:27.846 "is_configured": true, 00:38:27.846 "data_offset": 0, 00:38:27.846 "data_size": 65536 00:38:27.846 }, 00:38:27.846 { 00:38:27.846 "name": "BaseBdev4", 00:38:27.846 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:27.846 "is_configured": true, 00:38:27.846 "data_offset": 0, 00:38:27.846 "data_size": 65536 00:38:27.846 } 00:38:27.846 ] 00:38:27.846 }' 00:38:27.846 19:31:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:27.846 19:31:43 -- common/autotest_common.sh@10 -- # set +x 00:38:28.415 19:31:44 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:28.415 [2024-04-18 19:31:44.269399] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:38:28.415 [2024-04-18 19:31:44.269464] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:28.415 [2024-04-18 19:31:44.288517] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:38:28.415 [2024-04-18 19:31:44.290702] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:28.415 19:31:44 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:29.790 "name": "raid_bdev1", 00:38:29.790 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:29.790 "strip_size_kb": 0, 00:38:29.790 "state": "online", 00:38:29.790 "raid_level": "raid1", 00:38:29.790 "superblock": false, 00:38:29.790 "num_base_bdevs": 4, 00:38:29.790 "num_base_bdevs_discovered": 4, 00:38:29.790 "num_base_bdevs_operational": 4, 00:38:29.790 "process": { 00:38:29.790 "type": "rebuild", 00:38:29.790 "target": "spare", 00:38:29.790 "progress": { 00:38:29.790 "blocks": 24576, 00:38:29.790 "percent": 37 00:38:29.790 } 00:38:29.790 }, 00:38:29.790 "base_bdevs_list": [ 00:38:29.790 { 00:38:29.790 "name": "spare", 00:38:29.790 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:29.790 "is_configured": true, 00:38:29.790 "data_offset": 0, 00:38:29.790 "data_size": 65536 00:38:29.790 }, 00:38:29.790 { 00:38:29.790 "name": "BaseBdev2", 00:38:29.790 "uuid": "a242174d-921c-4e58-a8f7-70ce84b94acf", 00:38:29.790 "is_configured": true, 00:38:29.790 "data_offset": 0, 00:38:29.790 "data_size": 65536 00:38:29.790 }, 00:38:29.790 { 00:38:29.790 "name": "BaseBdev3", 00:38:29.790 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:29.790 "is_configured": true, 00:38:29.790 "data_offset": 0, 00:38:29.790 "data_size": 65536 00:38:29.790 }, 00:38:29.790 { 00:38:29.790 "name": "BaseBdev4", 00:38:29.790 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:29.790 "is_configured": true, 00:38:29.790 "data_offset": 0, 00:38:29.790 "data_size": 65536 00:38:29.790 } 00:38:29.790 ] 00:38:29.790 }' 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:29.790 19:31:45 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:30.050 [2024-04-18 19:31:45.896532] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:30.050 [2024-04-18 19:31:45.900175] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:30.050 [2024-04-18 19:31:45.900385] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.050 19:31:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.383 19:31:46 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:38:30.383 "name": "raid_bdev1", 00:38:30.383 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:30.383 "strip_size_kb": 0, 00:38:30.383 "state": "online", 00:38:30.383 "raid_level": "raid1", 00:38:30.383 "superblock": false, 00:38:30.383 "num_base_bdevs": 4, 00:38:30.383 "num_base_bdevs_discovered": 3, 00:38:30.383 "num_base_bdevs_operational": 3, 00:38:30.383 "base_bdevs_list": [ 00:38:30.383 { 00:38:30.383 "name": null, 00:38:30.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:30.383 "is_configured": false, 00:38:30.383 "data_offset": 0, 00:38:30.383 "data_size": 65536 00:38:30.383 }, 00:38:30.383 { 00:38:30.384 "name": "BaseBdev2", 00:38:30.384 "uuid": "a242174d-921c-4e58-a8f7-70ce84b94acf", 00:38:30.384 "is_configured": true, 00:38:30.384 "data_offset": 0, 00:38:30.384 "data_size": 65536 00:38:30.384 }, 00:38:30.384 { 00:38:30.384 "name": "BaseBdev3", 00:38:30.384 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:30.384 "is_configured": true, 00:38:30.384 "data_offset": 0, 00:38:30.384 "data_size": 65536 00:38:30.384 }, 00:38:30.384 { 00:38:30.384 "name": "BaseBdev4", 00:38:30.384 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:30.384 "is_configured": true, 00:38:30.384 "data_offset": 0, 00:38:30.384 "data_size": 65536 00:38:30.384 } 00:38:30.384 ] 00:38:30.384 }' 00:38:30.384 19:31:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:30.384 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:31.318 19:31:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.318 19:31:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:31.318 "name": "raid_bdev1", 00:38:31.318 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:31.318 "strip_size_kb": 0, 00:38:31.318 "state": "online", 00:38:31.318 "raid_level": "raid1", 00:38:31.318 "superblock": false, 00:38:31.318 "num_base_bdevs": 4, 00:38:31.318 "num_base_bdevs_discovered": 3, 00:38:31.318 "num_base_bdevs_operational": 3, 00:38:31.318 "base_bdevs_list": [ 00:38:31.318 { 00:38:31.318 "name": null, 00:38:31.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:31.318 "is_configured": false, 00:38:31.318 "data_offset": 0, 00:38:31.318 "data_size": 65536 00:38:31.318 }, 00:38:31.318 { 00:38:31.318 "name": "BaseBdev2", 00:38:31.318 "uuid": "a242174d-921c-4e58-a8f7-70ce84b94acf", 00:38:31.318 "is_configured": true, 00:38:31.318 "data_offset": 0, 00:38:31.318 "data_size": 65536 00:38:31.318 }, 00:38:31.318 { 00:38:31.318 "name": "BaseBdev3", 00:38:31.318 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:31.318 "is_configured": true, 00:38:31.318 "data_offset": 0, 00:38:31.318 "data_size": 65536 00:38:31.318 }, 00:38:31.318 { 00:38:31.318 "name": "BaseBdev4", 00:38:31.318 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:31.318 "is_configured": true, 00:38:31.318 "data_offset": 0, 00:38:31.318 "data_size": 65536 00:38:31.318 } 00:38:31.318 ] 00:38:31.318 }' 00:38:31.318 19:31:47 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:38:31.318 19:31:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:31.318 19:31:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:31.318 19:31:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:38:31.318 19:31:47 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:31.883 [2024-04-18 19:31:47.518142] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:38:31.883 [2024-04-18 19:31:47.518204] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:31.883 [2024-04-18 19:31:47.535629] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:38:31.883 [2024-04-18 19:31:47.537905] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:31.883 19:31:47 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:32.817 19:31:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.076 19:31:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:33.076 "name": "raid_bdev1", 00:38:33.076 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:33.076 "strip_size_kb": 0, 00:38:33.076 "state": "online", 00:38:33.076 "raid_level": "raid1", 00:38:33.076 "superblock": false, 00:38:33.076 "num_base_bdevs": 4, 00:38:33.076 "num_base_bdevs_discovered": 4, 00:38:33.076 "num_base_bdevs_operational": 4, 00:38:33.076 "process": { 00:38:33.076 "type": "rebuild", 00:38:33.076 "target": "spare", 00:38:33.076 "progress": { 00:38:33.076 "blocks": 26624, 00:38:33.076 "percent": 40 00:38:33.076 } 00:38:33.076 }, 00:38:33.076 "base_bdevs_list": [ 00:38:33.076 { 00:38:33.076 "name": "spare", 00:38:33.076 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:33.076 "is_configured": true, 00:38:33.076 "data_offset": 0, 00:38:33.076 "data_size": 65536 00:38:33.076 }, 00:38:33.076 { 00:38:33.076 "name": "BaseBdev2", 00:38:33.076 "uuid": "a242174d-921c-4e58-a8f7-70ce84b94acf", 00:38:33.076 "is_configured": true, 00:38:33.076 "data_offset": 0, 00:38:33.076 "data_size": 65536 00:38:33.076 }, 00:38:33.076 { 00:38:33.076 "name": "BaseBdev3", 00:38:33.076 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:33.076 "is_configured": true, 00:38:33.076 "data_offset": 0, 00:38:33.076 "data_size": 65536 00:38:33.076 }, 00:38:33.076 { 00:38:33.076 "name": "BaseBdev4", 00:38:33.076 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:33.076 "is_configured": true, 00:38:33.076 "data_offset": 0, 00:38:33.076 "data_size": 65536 00:38:33.076 } 00:38:33.076 ] 00:38:33.076 }' 00:38:33.076 19:31:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:33.076 19:31:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:33.076 19:31:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:33.334 19:31:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:38:33.335 19:31:49 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:38:33.335 19:31:49 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:38:33.335 19:31:49 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:38:33.335 19:31:49 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:38:33.335 19:31:49 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:38:33.593 [2024-04-18 19:31:49.308385] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:33.593 [2024-04-18 19:31:49.348771] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:33.593 19:31:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.852 19:31:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:33.852 "name": "raid_bdev1", 00:38:33.852 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:33.852 "strip_size_kb": 0, 00:38:33.852 "state": "online", 00:38:33.852 "raid_level": "raid1", 00:38:33.852 "superblock": false, 00:38:33.852 "num_base_bdevs": 4, 00:38:33.852 "num_base_bdevs_discovered": 3, 00:38:33.852 "num_base_bdevs_operational": 3, 00:38:33.852 "process": { 00:38:33.852 "type": "rebuild", 00:38:33.852 "target": "spare", 00:38:33.852 "progress": { 00:38:33.852 "blocks": 40960, 00:38:33.852 "percent": 62 00:38:33.852 } 00:38:33.852 }, 00:38:33.852 "base_bdevs_list": [ 00:38:33.852 { 00:38:33.852 "name": "spare", 00:38:33.852 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:33.852 "is_configured": true, 00:38:33.852 "data_offset": 0, 00:38:33.852 "data_size": 65536 00:38:33.852 }, 00:38:33.852 { 00:38:33.852 "name": null, 00:38:33.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:33.852 "is_configured": false, 00:38:33.852 "data_offset": 0, 00:38:33.852 "data_size": 65536 00:38:33.852 }, 00:38:33.852 { 00:38:33.852 "name": "BaseBdev3", 00:38:33.852 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:33.852 "is_configured": true, 00:38:33.852 "data_offset": 0, 00:38:33.852 "data_size": 65536 00:38:33.852 }, 00:38:33.852 { 00:38:33.852 "name": "BaseBdev4", 00:38:33.852 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:33.852 "is_configured": true, 00:38:33.852 "data_offset": 0, 00:38:33.852 "data_size": 65536 00:38:33.852 } 00:38:33.852 ] 00:38:33.852 }' 00:38:33.852 19:31:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:33.852 19:31:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:33.852 19:31:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:34.110 19:31:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:34.110 19:31:49 -- bdev/bdev_raid.sh@657 -- # local timeout=554 00:38:34.110 19:31:49 -- 
bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:38:34.110 19:31:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:34.110 19:31:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:34.110 19:31:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:34.110 19:31:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:34.111 19:31:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:34.111 19:31:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:34.111 19:31:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.369 19:31:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:34.369 "name": "raid_bdev1", 00:38:34.369 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:34.369 "strip_size_kb": 0, 00:38:34.369 "state": "online", 00:38:34.369 "raid_level": "raid1", 00:38:34.369 "superblock": false, 00:38:34.369 "num_base_bdevs": 4, 00:38:34.369 "num_base_bdevs_discovered": 3, 00:38:34.369 "num_base_bdevs_operational": 3, 00:38:34.369 "process": { 00:38:34.369 "type": "rebuild", 00:38:34.369 "target": "spare", 00:38:34.369 "progress": { 00:38:34.369 "blocks": 49152, 00:38:34.369 "percent": 75 00:38:34.369 } 00:38:34.369 }, 00:38:34.369 "base_bdevs_list": [ 00:38:34.369 { 00:38:34.369 "name": "spare", 00:38:34.369 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:34.369 "is_configured": true, 00:38:34.369 "data_offset": 0, 00:38:34.369 "data_size": 65536 00:38:34.369 }, 00:38:34.369 { 00:38:34.369 "name": null, 00:38:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:34.369 "is_configured": false, 00:38:34.369 "data_offset": 0, 00:38:34.369 "data_size": 65536 00:38:34.369 }, 00:38:34.369 { 00:38:34.369 "name": "BaseBdev3", 00:38:34.369 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:34.369 "is_configured": true, 00:38:34.369 "data_offset": 0, 00:38:34.369 "data_size": 65536 00:38:34.369 }, 00:38:34.369 { 00:38:34.369 "name": "BaseBdev4", 00:38:34.369 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:34.369 "is_configured": true, 00:38:34.369 "data_offset": 0, 00:38:34.369 "data_size": 65536 00:38:34.369 } 00:38:34.369 ] 00:38:34.369 }' 00:38:34.369 19:31:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:34.369 19:31:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:34.369 19:31:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:34.369 19:31:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:34.369 19:31:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:34.937 [2024-04-18 19:31:50.757255] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:34.937 [2024-04-18 19:31:50.757347] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:34.937 [2024-04-18 19:31:50.757445] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
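For readers reconstructing the test logic from this trace: the rebuild checks above reduce to querying bdev_raid_get_bdevs over the bdevperf RPC socket and filtering the result with jq. A minimal polling sketch under that assumption follows (rpc.py path, socket and bdev name are copied from the trace; the loop and the helper name are illustrative, not part of the test suite):

  # Illustrative only: poll rebuild progress of raid_bdev1 the way the trace does.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  wait_for_rebuild() {
      while true; do
          info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                 jq -r '.[] | select(.name == "raid_bdev1")')
          ptype=$(jq -r '.process.type // "none"'    <<< "$info")
          target=$(jq -r '.process.target // "none"' <<< "$info")
          # No process reported means the rebuild has finished (or never started).
          [[ $ptype == none && $target == none ]] && break
          # Progress is reported in blocks and percent, e.g. 37%, 62%, 75% above.
          jq -r '.process.progress.percent' <<< "$info"
          sleep 1
      done
  }

The same bdev_raid_get_bdevs output is what the surrounding verify_raid_bdev_state checks parse for state, raid_level and num_base_bdevs_discovered.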
00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:35.505 "name": "raid_bdev1", 00:38:35.505 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:35.505 "strip_size_kb": 0, 00:38:35.505 "state": "online", 00:38:35.505 "raid_level": "raid1", 00:38:35.505 "superblock": false, 00:38:35.505 "num_base_bdevs": 4, 00:38:35.505 "num_base_bdevs_discovered": 3, 00:38:35.505 "num_base_bdevs_operational": 3, 00:38:35.505 "base_bdevs_list": [ 00:38:35.505 { 00:38:35.505 "name": "spare", 00:38:35.505 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:35.505 "is_configured": true, 00:38:35.505 "data_offset": 0, 00:38:35.505 "data_size": 65536 00:38:35.505 }, 00:38:35.505 { 00:38:35.505 "name": null, 00:38:35.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:35.505 "is_configured": false, 00:38:35.505 "data_offset": 0, 00:38:35.505 "data_size": 65536 00:38:35.505 }, 00:38:35.505 { 00:38:35.505 "name": "BaseBdev3", 00:38:35.505 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:35.505 "is_configured": true, 00:38:35.505 "data_offset": 0, 00:38:35.505 "data_size": 65536 00:38:35.505 }, 00:38:35.505 { 00:38:35.505 "name": "BaseBdev4", 00:38:35.505 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:35.505 "is_configured": true, 00:38:35.505 "data_offset": 0, 00:38:35.505 "data_size": 65536 00:38:35.505 } 00:38:35.505 ] 00:38:35.505 }' 00:38:35.505 19:31:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:35.762 19:31:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@660 -- # break 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.763 19:31:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:36.096 "name": "raid_bdev1", 00:38:36.096 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:36.096 "strip_size_kb": 0, 00:38:36.096 "state": "online", 00:38:36.096 "raid_level": "raid1", 00:38:36.096 "superblock": false, 00:38:36.096 "num_base_bdevs": 4, 00:38:36.096 "num_base_bdevs_discovered": 3, 00:38:36.096 "num_base_bdevs_operational": 3, 00:38:36.096 "base_bdevs_list": [ 00:38:36.096 { 00:38:36.096 "name": "spare", 00:38:36.096 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:36.096 "is_configured": true, 00:38:36.096 "data_offset": 0, 00:38:36.096 "data_size": 65536 00:38:36.096 }, 00:38:36.096 { 00:38:36.096 "name": null, 00:38:36.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.096 "is_configured": false, 00:38:36.096 "data_offset": 0, 00:38:36.096 "data_size": 65536 00:38:36.096 }, 
00:38:36.096 { 00:38:36.096 "name": "BaseBdev3", 00:38:36.096 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:36.096 "is_configured": true, 00:38:36.096 "data_offset": 0, 00:38:36.096 "data_size": 65536 00:38:36.096 }, 00:38:36.096 { 00:38:36.096 "name": "BaseBdev4", 00:38:36.096 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:36.096 "is_configured": true, 00:38:36.096 "data_offset": 0, 00:38:36.096 "data_size": 65536 00:38:36.096 } 00:38:36.096 ] 00:38:36.096 }' 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:36.096 19:31:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.663 19:31:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:36.663 "name": "raid_bdev1", 00:38:36.663 "uuid": "ace87cee-0e5d-4441-8e2f-84c3a8b3d998", 00:38:36.663 "strip_size_kb": 0, 00:38:36.663 "state": "online", 00:38:36.663 "raid_level": "raid1", 00:38:36.663 "superblock": false, 00:38:36.663 "num_base_bdevs": 4, 00:38:36.663 "num_base_bdevs_discovered": 3, 00:38:36.663 "num_base_bdevs_operational": 3, 00:38:36.663 "base_bdevs_list": [ 00:38:36.663 { 00:38:36.663 "name": "spare", 00:38:36.663 "uuid": "0731d82f-2544-50f8-a67d-1b4fdad555e3", 00:38:36.663 "is_configured": true, 00:38:36.663 "data_offset": 0, 00:38:36.663 "data_size": 65536 00:38:36.663 }, 00:38:36.663 { 00:38:36.663 "name": null, 00:38:36.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.663 "is_configured": false, 00:38:36.663 "data_offset": 0, 00:38:36.663 "data_size": 65536 00:38:36.663 }, 00:38:36.663 { 00:38:36.663 "name": "BaseBdev3", 00:38:36.663 "uuid": "f444b1b2-ec10-4dfb-9f6d-3ee63211d875", 00:38:36.663 "is_configured": true, 00:38:36.663 "data_offset": 0, 00:38:36.663 "data_size": 65536 00:38:36.663 }, 00:38:36.663 { 00:38:36.663 "name": "BaseBdev4", 00:38:36.663 "uuid": "229501c0-a4c9-437f-b70b-c3052c6518ee", 00:38:36.663 "is_configured": true, 00:38:36.663 "data_offset": 0, 00:38:36.663 "data_size": 65536 00:38:36.663 } 00:38:36.663 ] 00:38:36.663 }' 00:38:36.663 19:31:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:36.663 19:31:52 -- common/autotest_common.sh@10 -- # set +x 00:38:37.230 19:31:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:37.489 [2024-04-18 19:31:53.255191] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:37.489 [2024-04-18 19:31:53.255234] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:37.489 [2024-04-18 19:31:53.255311] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:37.489 [2024-04-18 19:31:53.255388] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:37.489 [2024-04-18 19:31:53.255399] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:38:37.489 19:31:53 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:37.489 19:31:53 -- bdev/bdev_raid.sh@671 -- # jq length 00:38:37.747 19:31:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:38:37.747 19:31:53 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:38:37.747 19:31:53 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@12 -- # local i 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:37.747 19:31:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:38.005 /dev/nbd0 00:38:38.005 19:31:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:38.005 19:31:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:38.005 19:31:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:38:38.005 19:31:53 -- common/autotest_common.sh@855 -- # local i 00:38:38.005 19:31:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:38:38.005 19:31:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:38:38.005 19:31:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:38:38.005 19:31:53 -- common/autotest_common.sh@859 -- # break 00:38:38.005 19:31:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:38.005 19:31:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:38.005 19:31:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.005 1+0 records in 00:38:38.005 1+0 records out 00:38:38.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380915 s, 10.8 MB/s 00:38:38.005 19:31:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.005 19:31:53 -- common/autotest_common.sh@872 -- # size=4096 00:38:38.005 19:31:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.005 19:31:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:38:38.005 19:31:53 -- common/autotest_common.sh@875 -- # return 0 00:38:38.006 19:31:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:38.006 19:31:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:38.006 19:31:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:38:38.265 /dev/nbd1 00:38:38.265 19:31:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:38.265 19:31:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:38.265 19:31:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:38:38.265 19:31:54 -- common/autotest_common.sh@855 -- # local i 00:38:38.265 19:31:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:38:38.265 19:31:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:38:38.265 19:31:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:38:38.265 19:31:54 -- common/autotest_common.sh@859 -- # break 00:38:38.265 19:31:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:38.265 19:31:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:38.265 19:31:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.265 1+0 records in 00:38:38.265 1+0 records out 00:38:38.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557196 s, 7.4 MB/s 00:38:38.265 19:31:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.265 19:31:54 -- common/autotest_common.sh@872 -- # size=4096 00:38:38.265 19:31:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.265 19:31:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:38:38.265 19:31:54 -- common/autotest_common.sh@875 -- # return 0 00:38:38.265 19:31:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:38.265 19:31:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:38.265 19:31:54 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:38:38.524 19:31:54 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:38:38.524 19:31:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:38.524 19:31:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:38.524 19:31:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:38.524 19:31:54 -- bdev/nbd_common.sh@51 -- # local i 00:38:38.524 19:31:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:38.524 19:31:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@41 -- # break 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@45 -- # return 0 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:38.783 19:31:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@38 -- 
# grep -q -w nbd1 /proc/partitions 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@41 -- # break 00:38:39.041 19:31:54 -- bdev/nbd_common.sh@45 -- # return 0 00:38:39.041 19:31:54 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:38:39.041 19:31:54 -- bdev/bdev_raid.sh@709 -- # killprocess 134991 00:38:39.041 19:31:54 -- common/autotest_common.sh@936 -- # '[' -z 134991 ']' 00:38:39.041 19:31:54 -- common/autotest_common.sh@940 -- # kill -0 134991 00:38:39.041 19:31:54 -- common/autotest_common.sh@941 -- # uname 00:38:39.299 19:31:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:39.299 19:31:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134991 00:38:39.299 killing process with pid 134991 00:38:39.299 Received shutdown signal, test time was about 60.000000 seconds 00:38:39.299 00:38:39.299 Latency(us) 00:38:39.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.299 =================================================================================================================== 00:38:39.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:39.299 19:31:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:39.299 19:31:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:39.299 19:31:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134991' 00:38:39.299 19:31:54 -- common/autotest_common.sh@955 -- # kill 134991 00:38:39.299 19:31:54 -- common/autotest_common.sh@960 -- # wait 134991 00:38:39.299 [2024-04-18 19:31:54.986301] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:39.867 [2024-04-18 19:31:55.559030] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:41.243 ************************************ 00:38:41.243 END TEST raid_rebuild_test 00:38:41.243 ************************************ 00:38:41.243 19:31:56 -- bdev/bdev_raid.sh@711 -- # return 0 00:38:41.243 00:38:41.243 real 0m24.901s 00:38:41.243 user 0m34.924s 00:38:41.243 sys 0m4.249s 00:38:41.243 19:31:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:41.244 19:31:56 -- common/autotest_common.sh@10 -- # set +x 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:38:41.244 19:31:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:38:41.244 19:31:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:41.244 19:31:57 -- common/autotest_common.sh@10 -- # set +x 00:38:41.244 ************************************ 00:38:41.244 START TEST raid_rebuild_test_sb 00:38:41.244 ************************************ 00:38:41.244 19:31:57 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true false 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i <= 
num_base_bdevs )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@544 -- # raid_pid=135603 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135603 /var/tmp/spdk-raid.sock 00:38:41.244 19:31:57 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:41.244 19:31:57 -- common/autotest_common.sh@817 -- # '[' -z 135603 ']' 00:38:41.244 19:31:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:41.244 19:31:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:41.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:41.244 19:31:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:41.244 19:31:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:41.244 19:31:57 -- common/autotest_common.sh@10 -- # set +x 00:38:41.503 [2024-04-18 19:31:57.173135] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:38:41.503 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:41.503 Zero copy mechanism will not be used. 
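As a reference for the bring-up pattern repeated at the start of each test case in this trace: bdevperf is launched in application mode (-z) against a private RPC socket, the script waits for that socket to accept RPCs, and only then are the malloc/passthru/raid bdevs created. A rough sketch with the flags copied from the trace; the polling loop merely stands in for the suite's waitforlisten helper:

  # Illustrative bring-up, mirroring the trace above.
  sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Stand-in for waitforlisten: block until the UNIX-domain RPC socket answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
      sleep 0.5
  done

  # With superblock=true the create step later in this test adds -s, i.e.
  #   rpc.py -s "$sock" bdev_raid_create -s -r raid1 \
  #          -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1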
00:38:41.503 [2024-04-18 19:31:57.173351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135603 ] 00:38:41.503 [2024-04-18 19:31:57.362258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.828 [2024-04-18 19:31:57.588691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.113 [2024-04-18 19:31:57.841065] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:42.113 19:31:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:42.113 19:31:57 -- common/autotest_common.sh@850 -- # return 0 00:38:42.113 19:31:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:42.113 19:31:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:38:42.113 19:31:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:42.371 BaseBdev1_malloc 00:38:42.371 19:31:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:42.630 [2024-04-18 19:31:58.519972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:42.630 [2024-04-18 19:31:58.520070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.630 [2024-04-18 19:31:58.520101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:38:42.630 [2024-04-18 19:31:58.520147] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.630 [2024-04-18 19:31:58.522569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.630 [2024-04-18 19:31:58.522620] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:42.630 BaseBdev1 00:38:42.630 19:31:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:42.630 19:31:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:38:42.630 19:31:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:43.196 BaseBdev2_malloc 00:38:43.196 19:31:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:43.455 [2024-04-18 19:31:59.156330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:43.455 [2024-04-18 19:31:59.156429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:43.455 [2024-04-18 19:31:59.156471] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:38:43.455 [2024-04-18 19:31:59.156521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:43.455 [2024-04-18 19:31:59.158935] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:43.455 [2024-04-18 19:31:59.159003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:43.455 BaseBdev2 00:38:43.455 19:31:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:43.455 19:31:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:38:43.455 19:31:59 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:43.713 BaseBdev3_malloc 00:38:43.713 19:31:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:43.972 [2024-04-18 19:31:59.667386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:43.972 [2024-04-18 19:31:59.667487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:43.972 [2024-04-18 19:31:59.667526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:38:43.972 [2024-04-18 19:31:59.667564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:43.972 [2024-04-18 19:31:59.669737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:43.972 [2024-04-18 19:31:59.669789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:43.972 BaseBdev3 00:38:43.972 19:31:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:38:43.972 19:31:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:38:43.972 19:31:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:44.230 BaseBdev4_malloc 00:38:44.230 19:31:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:44.488 [2024-04-18 19:32:00.208703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:44.488 [2024-04-18 19:32:00.208807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:44.488 [2024-04-18 19:32:00.208842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:44.488 [2024-04-18 19:32:00.208890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:44.488 [2024-04-18 19:32:00.211439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:44.488 [2024-04-18 19:32:00.211504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:44.488 BaseBdev4 00:38:44.488 19:32:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:38:44.745 spare_malloc 00:38:44.745 19:32:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:45.004 spare_delay 00:38:45.004 19:32:00 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:45.262 [2024-04-18 19:32:01.054205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:45.262 [2024-04-18 19:32:01.054288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:45.262 [2024-04-18 19:32:01.054318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:45.262 [2024-04-18 19:32:01.054360] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:45.262 [2024-04-18 19:32:01.056865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:38:45.262 [2024-04-18 19:32:01.056942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:45.262 spare 00:38:45.262 19:32:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:38:45.521 [2024-04-18 19:32:01.306331] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:45.521 [2024-04-18 19:32:01.308471] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:45.521 [2024-04-18 19:32:01.308554] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:45.521 [2024-04-18 19:32:01.308596] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:45.521 [2024-04-18 19:32:01.308788] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:38:45.521 [2024-04-18 19:32:01.308802] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:45.521 [2024-04-18 19:32:01.308952] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:45.521 [2024-04-18 19:32:01.309283] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:38:45.521 [2024-04-18 19:32:01.309299] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:38:45.521 [2024-04-18 19:32:01.309450] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.521 19:32:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.779 19:32:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:45.779 "name": "raid_bdev1", 00:38:45.779 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:38:45.779 "strip_size_kb": 0, 00:38:45.779 "state": "online", 00:38:45.779 "raid_level": "raid1", 00:38:45.779 "superblock": true, 00:38:45.779 "num_base_bdevs": 4, 00:38:45.779 "num_base_bdevs_discovered": 4, 00:38:45.779 "num_base_bdevs_operational": 4, 00:38:45.779 "base_bdevs_list": [ 00:38:45.779 { 00:38:45.779 "name": "BaseBdev1", 00:38:45.779 "uuid": "fef4592c-f80a-5910-a761-8ffce48453b2", 00:38:45.779 "is_configured": true, 00:38:45.779 "data_offset": 2048, 00:38:45.779 "data_size": 63488 00:38:45.779 }, 00:38:45.779 { 00:38:45.779 "name": "BaseBdev2", 00:38:45.779 "uuid": "9106ee27-1064-5bdd-ac8f-e9298813f877", 00:38:45.779 "is_configured": true, 00:38:45.779 "data_offset": 2048, 
00:38:45.779 "data_size": 63488 00:38:45.779 }, 00:38:45.779 { 00:38:45.779 "name": "BaseBdev3", 00:38:45.779 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:38:45.779 "is_configured": true, 00:38:45.779 "data_offset": 2048, 00:38:45.779 "data_size": 63488 00:38:45.779 }, 00:38:45.779 { 00:38:45.779 "name": "BaseBdev4", 00:38:45.779 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:38:45.779 "is_configured": true, 00:38:45.779 "data_offset": 2048, 00:38:45.779 "data_size": 63488 00:38:45.779 } 00:38:45.779 ] 00:38:45.779 }' 00:38:45.779 19:32:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:45.779 19:32:01 -- common/autotest_common.sh@10 -- # set +x 00:38:46.712 19:32:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:46.712 19:32:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:38:46.712 [2024-04-18 19:32:02.506907] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:46.712 19:32:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:38:46.712 19:32:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.713 19:32:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:46.970 19:32:02 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:38:46.970 19:32:02 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:38:46.970 19:32:02 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:38:46.970 19:32:02 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@12 -- # local i 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:46.970 19:32:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:47.228 [2024-04-18 19:32:02.951676] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:47.228 /dev/nbd0 00:38:47.228 19:32:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:47.228 19:32:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:47.228 19:32:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:38:47.228 19:32:03 -- common/autotest_common.sh@855 -- # local i 00:38:47.228 19:32:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:38:47.228 19:32:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:38:47.228 19:32:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:38:47.228 19:32:03 -- common/autotest_common.sh@859 -- # break 00:38:47.228 19:32:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:47.228 19:32:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:47.228 19:32:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:47.228 1+0 records in 00:38:47.228 1+0 records out 00:38:47.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414741 s, 9.9 MB/s 00:38:47.228 
19:32:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:47.228 19:32:03 -- common/autotest_common.sh@872 -- # size=4096 00:38:47.228 19:32:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:47.228 19:32:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:38:47.228 19:32:03 -- common/autotest_common.sh@875 -- # return 0 00:38:47.228 19:32:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:47.228 19:32:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:47.228 19:32:03 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:38:47.228 19:32:03 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:38:47.228 19:32:03 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:38:55.352 63488+0 records in 00:38:55.352 63488+0 records out 00:38:55.352 32505856 bytes (33 MB, 31 MiB) copied, 7.00696 s, 4.6 MB/s 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@51 -- # local i 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:55.352 [2024-04-18 19:32:10.340651] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@41 -- # break 00:38:55.352 19:32:10 -- bdev/nbd_common.sh@45 -- # return 0 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:55.352 [2024-04-18 19:32:10.612519] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.352 
19:32:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:55.352 "name": "raid_bdev1", 00:38:55.352 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:38:55.352 "strip_size_kb": 0, 00:38:55.352 "state": "online", 00:38:55.352 "raid_level": "raid1", 00:38:55.352 "superblock": true, 00:38:55.352 "num_base_bdevs": 4, 00:38:55.352 "num_base_bdevs_discovered": 3, 00:38:55.352 "num_base_bdevs_operational": 3, 00:38:55.352 "base_bdevs_list": [ 00:38:55.352 { 00:38:55.352 "name": null, 00:38:55.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.352 "is_configured": false, 00:38:55.352 "data_offset": 2048, 00:38:55.352 "data_size": 63488 00:38:55.352 }, 00:38:55.352 { 00:38:55.352 "name": "BaseBdev2", 00:38:55.352 "uuid": "9106ee27-1064-5bdd-ac8f-e9298813f877", 00:38:55.352 "is_configured": true, 00:38:55.352 "data_offset": 2048, 00:38:55.352 "data_size": 63488 00:38:55.352 }, 00:38:55.352 { 00:38:55.352 "name": "BaseBdev3", 00:38:55.352 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:38:55.352 "is_configured": true, 00:38:55.352 "data_offset": 2048, 00:38:55.352 "data_size": 63488 00:38:55.352 }, 00:38:55.352 { 00:38:55.352 "name": "BaseBdev4", 00:38:55.352 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:38:55.352 "is_configured": true, 00:38:55.352 "data_offset": 2048, 00:38:55.352 "data_size": 63488 00:38:55.352 } 00:38:55.352 ] 00:38:55.352 }' 00:38:55.352 19:32:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:55.352 19:32:10 -- common/autotest_common.sh@10 -- # set +x 00:38:55.919 19:32:11 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:56.177 [2024-04-18 19:32:11.848806] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:38:56.177 [2024-04-18 19:32:11.848859] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:56.177 [2024-04-18 19:32:11.867439] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4bc0 00:38:56.177 [2024-04-18 19:32:11.869619] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:56.177 19:32:11 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:57.113 19:32:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.379 19:32:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:57.379 "name": "raid_bdev1", 00:38:57.379 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:38:57.379 "strip_size_kb": 0, 00:38:57.379 "state": "online", 00:38:57.379 "raid_level": "raid1", 00:38:57.379 "superblock": true, 00:38:57.379 "num_base_bdevs": 4, 00:38:57.379 "num_base_bdevs_discovered": 4, 00:38:57.379 "num_base_bdevs_operational": 4, 00:38:57.379 "process": { 00:38:57.379 "type": "rebuild", 00:38:57.379 "target": "spare", 00:38:57.379 "progress": { 00:38:57.379 "blocks": 24576, 00:38:57.379 "percent": 38 00:38:57.379 } 00:38:57.379 }, 00:38:57.379 
"base_bdevs_list": [ 00:38:57.379 { 00:38:57.379 "name": "spare", 00:38:57.379 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:38:57.379 "is_configured": true, 00:38:57.379 "data_offset": 2048, 00:38:57.379 "data_size": 63488 00:38:57.379 }, 00:38:57.379 { 00:38:57.379 "name": "BaseBdev2", 00:38:57.379 "uuid": "9106ee27-1064-5bdd-ac8f-e9298813f877", 00:38:57.379 "is_configured": true, 00:38:57.379 "data_offset": 2048, 00:38:57.379 "data_size": 63488 00:38:57.379 }, 00:38:57.379 { 00:38:57.379 "name": "BaseBdev3", 00:38:57.379 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:38:57.379 "is_configured": true, 00:38:57.379 "data_offset": 2048, 00:38:57.379 "data_size": 63488 00:38:57.379 }, 00:38:57.379 { 00:38:57.379 "name": "BaseBdev4", 00:38:57.379 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:38:57.379 "is_configured": true, 00:38:57.379 "data_offset": 2048, 00:38:57.379 "data_size": 63488 00:38:57.379 } 00:38:57.379 ] 00:38:57.379 }' 00:38:57.379 19:32:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:57.379 19:32:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:57.379 19:32:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:57.379 19:32:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:38:57.379 19:32:13 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:57.640 [2024-04-18 19:32:13.535967] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:57.897 [2024-04-18 19:32:13.579946] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:57.897 [2024-04-18 19:32:13.580062] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:57.897 19:32:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.155 19:32:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:58.155 "name": "raid_bdev1", 00:38:58.155 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:38:58.155 "strip_size_kb": 0, 00:38:58.155 "state": "online", 00:38:58.155 "raid_level": "raid1", 00:38:58.155 "superblock": true, 00:38:58.155 "num_base_bdevs": 4, 00:38:58.155 "num_base_bdevs_discovered": 3, 00:38:58.155 "num_base_bdevs_operational": 3, 00:38:58.155 "base_bdevs_list": [ 00:38:58.155 { 00:38:58.155 "name": null, 00:38:58.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:58.155 "is_configured": false, 00:38:58.155 "data_offset": 2048, 00:38:58.155 "data_size": 63488 00:38:58.155 }, 00:38:58.155 { 
00:38:58.155 "name": "BaseBdev2", 00:38:58.155 "uuid": "9106ee27-1064-5bdd-ac8f-e9298813f877", 00:38:58.155 "is_configured": true, 00:38:58.155 "data_offset": 2048, 00:38:58.155 "data_size": 63488 00:38:58.155 }, 00:38:58.155 { 00:38:58.155 "name": "BaseBdev3", 00:38:58.155 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:38:58.155 "is_configured": true, 00:38:58.155 "data_offset": 2048, 00:38:58.155 "data_size": 63488 00:38:58.155 }, 00:38:58.155 { 00:38:58.155 "name": "BaseBdev4", 00:38:58.155 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:38:58.155 "is_configured": true, 00:38:58.155 "data_offset": 2048, 00:38:58.155 "data_size": 63488 00:38:58.155 } 00:38:58.155 ] 00:38:58.155 }' 00:38:58.155 19:32:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:58.155 19:32:13 -- common/autotest_common.sh@10 -- # set +x 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:58.724 19:32:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.982 19:32:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:38:58.982 "name": "raid_bdev1", 00:38:58.982 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:38:58.982 "strip_size_kb": 0, 00:38:58.982 "state": "online", 00:38:58.982 "raid_level": "raid1", 00:38:58.982 "superblock": true, 00:38:58.982 "num_base_bdevs": 4, 00:38:58.982 "num_base_bdevs_discovered": 3, 00:38:58.982 "num_base_bdevs_operational": 3, 00:38:58.982 "base_bdevs_list": [ 00:38:58.982 { 00:38:58.982 "name": null, 00:38:58.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:58.982 "is_configured": false, 00:38:58.982 "data_offset": 2048, 00:38:58.982 "data_size": 63488 00:38:58.982 }, 00:38:58.982 { 00:38:58.982 "name": "BaseBdev2", 00:38:58.982 "uuid": "9106ee27-1064-5bdd-ac8f-e9298813f877", 00:38:58.982 "is_configured": true, 00:38:58.982 "data_offset": 2048, 00:38:58.982 "data_size": 63488 00:38:58.982 }, 00:38:58.982 { 00:38:58.982 "name": "BaseBdev3", 00:38:58.982 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:38:58.982 "is_configured": true, 00:38:58.982 "data_offset": 2048, 00:38:58.982 "data_size": 63488 00:38:58.982 }, 00:38:58.982 { 00:38:58.982 "name": "BaseBdev4", 00:38:58.982 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:38:58.982 "is_configured": true, 00:38:58.982 "data_offset": 2048, 00:38:58.982 "data_size": 63488 00:38:58.982 } 00:38:58.982 ] 00:38:58.982 }' 00:38:58.982 19:32:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:38:59.241 19:32:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:59.241 19:32:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:38:59.241 19:32:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:38:59.241 19:32:14 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:59.241 [2024-04-18 19:32:15.166435] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:38:59.241 [2024-04-18 19:32:15.166490] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:59.500 [2024-04-18 19:32:15.183704] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4d60 00:38:59.500 [2024-04-18 19:32:15.185990] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:59.500 19:32:15 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:00.433 19:32:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:00.735 "name": "raid_bdev1", 00:39:00.735 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:00.735 "strip_size_kb": 0, 00:39:00.735 "state": "online", 00:39:00.735 "raid_level": "raid1", 00:39:00.735 "superblock": true, 00:39:00.735 "num_base_bdevs": 4, 00:39:00.735 "num_base_bdevs_discovered": 4, 00:39:00.735 "num_base_bdevs_operational": 4, 00:39:00.735 "process": { 00:39:00.735 "type": "rebuild", 00:39:00.735 "target": "spare", 00:39:00.735 "progress": { 00:39:00.735 "blocks": 24576, 00:39:00.735 "percent": 38 00:39:00.735 } 00:39:00.735 }, 00:39:00.735 "base_bdevs_list": [ 00:39:00.735 { 00:39:00.735 "name": "spare", 00:39:00.735 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:00.735 "is_configured": true, 00:39:00.735 "data_offset": 2048, 00:39:00.735 "data_size": 63488 00:39:00.735 }, 00:39:00.735 { 00:39:00.735 "name": "BaseBdev2", 00:39:00.735 "uuid": "9106ee27-1064-5bdd-ac8f-e9298813f877", 00:39:00.735 "is_configured": true, 00:39:00.735 "data_offset": 2048, 00:39:00.735 "data_size": 63488 00:39:00.735 }, 00:39:00.735 { 00:39:00.735 "name": "BaseBdev3", 00:39:00.735 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:00.735 "is_configured": true, 00:39:00.735 "data_offset": 2048, 00:39:00.735 "data_size": 63488 00:39:00.735 }, 00:39:00.735 { 00:39:00.735 "name": "BaseBdev4", 00:39:00.735 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:00.735 "is_configured": true, 00:39:00.735 "data_offset": 2048, 00:39:00.735 "data_size": 63488 00:39:00.735 } 00:39:00.735 ] 00:39:00.735 }' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:39:00.735 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:39:00.735 19:32:16 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:39:01.000 [2024-04-18 19:32:16.880299] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:01.000 [2024-04-18 19:32:16.896245] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca4d60 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.258 19:32:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:01.517 "name": "raid_bdev1", 00:39:01.517 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:01.517 "strip_size_kb": 0, 00:39:01.517 "state": "online", 00:39:01.517 "raid_level": "raid1", 00:39:01.517 "superblock": true, 00:39:01.517 "num_base_bdevs": 4, 00:39:01.517 "num_base_bdevs_discovered": 3, 00:39:01.517 "num_base_bdevs_operational": 3, 00:39:01.517 "process": { 00:39:01.517 "type": "rebuild", 00:39:01.517 "target": "spare", 00:39:01.517 "progress": { 00:39:01.517 "blocks": 43008, 00:39:01.517 "percent": 67 00:39:01.517 } 00:39:01.517 }, 00:39:01.517 "base_bdevs_list": [ 00:39:01.517 { 00:39:01.517 "name": "spare", 00:39:01.517 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:01.517 "is_configured": true, 00:39:01.517 "data_offset": 2048, 00:39:01.517 "data_size": 63488 00:39:01.517 }, 00:39:01.517 { 00:39:01.517 "name": null, 00:39:01.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.517 "is_configured": false, 00:39:01.517 "data_offset": 2048, 00:39:01.517 "data_size": 63488 00:39:01.517 }, 00:39:01.517 { 00:39:01.517 "name": "BaseBdev3", 00:39:01.517 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:01.517 "is_configured": true, 00:39:01.517 "data_offset": 2048, 00:39:01.517 "data_size": 63488 00:39:01.517 }, 00:39:01.517 { 00:39:01.517 "name": "BaseBdev4", 00:39:01.517 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:01.517 "is_configured": true, 00:39:01.517 "data_offset": 2048, 00:39:01.517 "data_size": 63488 00:39:01.517 } 00:39:01.517 ] 00:39:01.517 }' 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@657 -- # local timeout=582 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.517 19:32:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.085 19:32:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:02.085 "name": "raid_bdev1", 00:39:02.085 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:02.085 "strip_size_kb": 0, 00:39:02.085 "state": "online", 00:39:02.085 "raid_level": "raid1", 00:39:02.085 "superblock": true, 00:39:02.085 "num_base_bdevs": 4, 00:39:02.085 "num_base_bdevs_discovered": 3, 00:39:02.085 "num_base_bdevs_operational": 3, 00:39:02.085 "process": { 00:39:02.085 "type": "rebuild", 00:39:02.085 "target": "spare", 00:39:02.085 "progress": { 00:39:02.085 "blocks": 51200, 00:39:02.085 "percent": 80 00:39:02.085 } 00:39:02.085 }, 00:39:02.085 "base_bdevs_list": [ 00:39:02.085 { 00:39:02.085 "name": "spare", 00:39:02.085 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:02.085 "is_configured": true, 00:39:02.085 "data_offset": 2048, 00:39:02.085 "data_size": 63488 00:39:02.085 }, 00:39:02.085 { 00:39:02.085 "name": null, 00:39:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.085 "is_configured": false, 00:39:02.085 "data_offset": 2048, 00:39:02.085 "data_size": 63488 00:39:02.085 }, 00:39:02.085 { 00:39:02.085 "name": "BaseBdev3", 00:39:02.085 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:02.085 "is_configured": true, 00:39:02.085 "data_offset": 2048, 00:39:02.085 "data_size": 63488 00:39:02.085 }, 00:39:02.085 { 00:39:02.085 "name": "BaseBdev4", 00:39:02.085 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:02.085 "is_configured": true, 00:39:02.085 "data_offset": 2048, 00:39:02.085 "data_size": 63488 00:39:02.085 } 00:39:02.085 ] 00:39:02.085 }' 00:39:02.085 19:32:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:02.085 19:32:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:02.085 19:32:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:02.085 19:32:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:02.085 19:32:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:02.653 [2024-04-18 19:32:18.304379] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:02.653 [2024-04-18 19:32:18.304455] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:02.653 [2024-04-18 19:32:18.304596] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:03.219 19:32:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:03.219 19:32:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:03.219 "name": "raid_bdev1", 00:39:03.219 "uuid": 
"12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:03.219 "strip_size_kb": 0, 00:39:03.219 "state": "online", 00:39:03.219 "raid_level": "raid1", 00:39:03.219 "superblock": true, 00:39:03.219 "num_base_bdevs": 4, 00:39:03.219 "num_base_bdevs_discovered": 3, 00:39:03.219 "num_base_bdevs_operational": 3, 00:39:03.219 "base_bdevs_list": [ 00:39:03.219 { 00:39:03.219 "name": "spare", 00:39:03.219 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:03.219 "is_configured": true, 00:39:03.219 "data_offset": 2048, 00:39:03.219 "data_size": 63488 00:39:03.219 }, 00:39:03.219 { 00:39:03.219 "name": null, 00:39:03.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.219 "is_configured": false, 00:39:03.219 "data_offset": 2048, 00:39:03.219 "data_size": 63488 00:39:03.219 }, 00:39:03.219 { 00:39:03.219 "name": "BaseBdev3", 00:39:03.219 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:03.219 "is_configured": true, 00:39:03.219 "data_offset": 2048, 00:39:03.219 "data_size": 63488 00:39:03.219 }, 00:39:03.219 { 00:39:03.219 "name": "BaseBdev4", 00:39:03.219 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:03.219 "is_configured": true, 00:39:03.219 "data_offset": 2048, 00:39:03.219 "data_size": 63488 00:39:03.219 } 00:39:03.219 ] 00:39:03.219 }' 00:39:03.219 19:32:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:03.477 19:32:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:03.477 19:32:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:03.477 19:32:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:39:03.477 19:32:19 -- bdev/bdev_raid.sh@660 -- # break 00:39:03.477 19:32:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:03.477 19:32:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:03.478 19:32:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:03.478 19:32:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:39:03.478 19:32:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:03.478 19:32:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:03.478 19:32:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:03.736 "name": "raid_bdev1", 00:39:03.736 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:03.736 "strip_size_kb": 0, 00:39:03.736 "state": "online", 00:39:03.736 "raid_level": "raid1", 00:39:03.736 "superblock": true, 00:39:03.736 "num_base_bdevs": 4, 00:39:03.736 "num_base_bdevs_discovered": 3, 00:39:03.736 "num_base_bdevs_operational": 3, 00:39:03.736 "base_bdevs_list": [ 00:39:03.736 { 00:39:03.736 "name": "spare", 00:39:03.736 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:03.736 "is_configured": true, 00:39:03.736 "data_offset": 2048, 00:39:03.736 "data_size": 63488 00:39:03.736 }, 00:39:03.736 { 00:39:03.736 "name": null, 00:39:03.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.736 "is_configured": false, 00:39:03.736 "data_offset": 2048, 00:39:03.736 "data_size": 63488 00:39:03.736 }, 00:39:03.736 { 00:39:03.736 "name": "BaseBdev3", 00:39:03.736 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:03.736 "is_configured": true, 00:39:03.736 "data_offset": 2048, 00:39:03.736 "data_size": 63488 00:39:03.736 }, 00:39:03.736 { 00:39:03.736 "name": "BaseBdev4", 00:39:03.736 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:03.736 
"is_configured": true, 00:39:03.736 "data_offset": 2048, 00:39:03.736 "data_size": 63488 00:39:03.736 } 00:39:03.736 ] 00:39:03.736 }' 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:03.736 19:32:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:03.995 19:32:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:03.995 "name": "raid_bdev1", 00:39:03.995 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:03.995 "strip_size_kb": 0, 00:39:03.995 "state": "online", 00:39:03.995 "raid_level": "raid1", 00:39:03.995 "superblock": true, 00:39:03.995 "num_base_bdevs": 4, 00:39:03.995 "num_base_bdevs_discovered": 3, 00:39:03.995 "num_base_bdevs_operational": 3, 00:39:03.995 "base_bdevs_list": [ 00:39:03.995 { 00:39:03.995 "name": "spare", 00:39:03.995 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:03.995 "is_configured": true, 00:39:03.995 "data_offset": 2048, 00:39:03.995 "data_size": 63488 00:39:03.995 }, 00:39:03.995 { 00:39:03.995 "name": null, 00:39:03.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.995 "is_configured": false, 00:39:03.995 "data_offset": 2048, 00:39:03.995 "data_size": 63488 00:39:03.995 }, 00:39:03.995 { 00:39:03.995 "name": "BaseBdev3", 00:39:03.995 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:03.995 "is_configured": true, 00:39:03.995 "data_offset": 2048, 00:39:03.995 "data_size": 63488 00:39:03.995 }, 00:39:03.995 { 00:39:03.995 "name": "BaseBdev4", 00:39:03.995 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:03.995 "is_configured": true, 00:39:03.995 "data_offset": 2048, 00:39:03.995 "data_size": 63488 00:39:03.995 } 00:39:03.995 ] 00:39:03.995 }' 00:39:03.995 19:32:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:03.995 19:32:19 -- common/autotest_common.sh@10 -- # set +x 00:39:04.931 19:32:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:04.931 [2024-04-18 19:32:20.810516] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:04.931 [2024-04-18 19:32:20.810700] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:04.931 [2024-04-18 19:32:20.810888] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:04.931 [2024-04-18 
19:32:20.811174] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:04.931 [2024-04-18 19:32:20.811257] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:39:04.931 19:32:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:04.931 19:32:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:39:05.499 19:32:21 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:39:05.499 19:32:21 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:39:05.499 19:32:21 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@12 -- # local i 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:05.499 /dev/nbd0 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:05.499 19:32:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:05.499 19:32:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:39:05.499 19:32:21 -- common/autotest_common.sh@855 -- # local i 00:39:05.499 19:32:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:05.499 19:32:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:05.499 19:32:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:39:05.499 19:32:21 -- common/autotest_common.sh@859 -- # break 00:39:05.499 19:32:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:05.499 19:32:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:05.499 19:32:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:05.499 1+0 records in 00:39:05.499 1+0 records out 00:39:05.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489857 s, 8.4 MB/s 00:39:05.499 19:32:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:05.499 19:32:21 -- common/autotest_common.sh@872 -- # size=4096 00:39:05.499 19:32:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:05.799 19:32:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:05.799 19:32:21 -- common/autotest_common.sh@875 -- # return 0 00:39:05.799 19:32:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:05.799 19:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:05.799 19:32:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:39:05.799 /dev/nbd1 00:39:05.799 19:32:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:06.057 19:32:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:06.057 19:32:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:06.057 19:32:21 -- common/autotest_common.sh@855 -- # 
local i 00:39:06.057 19:32:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:06.057 19:32:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:06.057 19:32:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:06.057 19:32:21 -- common/autotest_common.sh@859 -- # break 00:39:06.057 19:32:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:06.058 19:32:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:06.058 19:32:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:06.058 1+0 records in 00:39:06.058 1+0 records out 00:39:06.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404621 s, 10.1 MB/s 00:39:06.058 19:32:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:06.058 19:32:21 -- common/autotest_common.sh@872 -- # size=4096 00:39:06.058 19:32:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:06.058 19:32:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:06.058 19:32:21 -- common/autotest_common.sh@875 -- # return 0 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:06.058 19:32:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:06.058 19:32:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@51 -- # local i 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:06.058 19:32:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@41 -- # break 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:06.624 19:32:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:39:06.882 19:32:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:39:06.882 19:32:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.882 19:32:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:06.882 19:32:22 -- 
bdev/nbd_common.sh@41 -- # break 00:39:06.882 19:32:22 -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.882 19:32:22 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:39:06.882 19:32:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:06.882 19:32:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:39:06.882 19:32:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:07.139 19:32:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:07.405 [2024-04-18 19:32:23.134073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:07.405 [2024-04-18 19:32:23.134167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:07.405 [2024-04-18 19:32:23.134210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:39:07.405 [2024-04-18 19:32:23.134231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:07.405 [2024-04-18 19:32:23.136794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:07.405 [2024-04-18 19:32:23.136866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:07.405 [2024-04-18 19:32:23.137000] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:07.405 [2024-04-18 19:32:23.137081] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:07.405 BaseBdev1 00:39:07.405 19:32:23 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:07.405 19:32:23 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:39:07.405 19:32:23 -- bdev/bdev_raid.sh@696 -- # continue 00:39:07.405 19:32:23 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:07.405 19:32:23 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:39:07.405 19:32:23 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:39:07.662 19:32:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:39:07.922 [2024-04-18 19:32:23.722241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:07.922 [2024-04-18 19:32:23.722344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:07.922 [2024-04-18 19:32:23.722389] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:39:07.922 [2024-04-18 19:32:23.722418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:07.922 [2024-04-18 19:32:23.722940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:07.922 [2024-04-18 19:32:23.722992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:07.922 [2024-04-18 19:32:23.723121] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:39:07.922 [2024-04-18 19:32:23.723135] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:39:07.922 [2024-04-18 19:32:23.723142] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
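Two details in this stretch are easy to miss. First, the cmp -i 1048576 /dev/nbd0 /dev/nbd1 check above skips 2048 blocks x 512 bytes = 1 MiB on each exported device, which matches the data_offset reported for every base bdev, so only the data region of BaseBdev1 is compared against the rebuilt spare. Second, the passthru bdevs are now being torn down and re-created one by one; because the raid was created with -s, each re-created base bdev still carries its on-disk superblock, so the examine callback claims it and raid_bdev1 is re-assembled in this run without a second bdev_raid_create call (the log continues this sequence for BaseBdev3, BaseBdev4, and the spare below, while the emptied BaseBdev2 slot is skipped). A rough sketch of that re-registration loop, using only RPCs that appear in the log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Re-register the surviving base bdevs; the removed BaseBdev2 slot is skipped.
  for b in BaseBdev1 BaseBdev3 BaseBdev4; do
      $rpc bdev_passthru_delete "$b"
      $rpc bdev_passthru_create -b "${b}_malloc" -p "$b"   # examine finds the raid superblock here
  done
  # The spare sits on top of the delay bdev rather than a plain malloc bdev.
  $rpc bdev_passthru_delete spare
  $rpc bdev_passthru_create -b spare_delay -p spare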
00:39:07.922 [2024-04-18 19:32:23.723163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:39:07.922 [2024-04-18 19:32:23.723238] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:07.922 BaseBdev3 00:39:07.922 19:32:23 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:07.922 19:32:23 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:39:07.922 19:32:23 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:39:08.181 19:32:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:08.492 [2024-04-18 19:32:24.169835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:39:08.492 [2024-04-18 19:32:24.169946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.492 [2024-04-18 19:32:24.169986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:39:08.492 [2024-04-18 19:32:24.170013] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.492 [2024-04-18 19:32:24.170506] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.492 [2024-04-18 19:32:24.170557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:08.492 [2024-04-18 19:32:24.170656] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:39:08.492 [2024-04-18 19:32:24.170681] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:08.492 BaseBdev4 00:39:08.492 19:32:24 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:08.750 19:32:24 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:09.008 [2024-04-18 19:32:24.749946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:09.008 [2024-04-18 19:32:24.750041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.008 [2024-04-18 19:32:24.750076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:39:09.008 [2024-04-18 19:32:24.750105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.008 [2024-04-18 19:32:24.750614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.008 [2024-04-18 19:32:24.750673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:09.008 [2024-04-18 19:32:24.750812] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:39:09.008 [2024-04-18 19:32:24.750856] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:09.008 spare 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
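With every base bdev re-registered, the run drops back into the same verify_raid_bdev_state pattern used throughout: dump all raid bdevs over RPC, select raid_bdev1 with jq, and compare the reported fields against the expected "online raid1 0 3" arguments. The following is only a condensed sketch of that check; the jq filter and field names are taken from the surrounding output, and the real helper lives in test/bdev/bdev_raid.sh.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # Expected: state online, level raid1, strip size 0, three of four base bdevs present.
  [ "$(jq -r '.state' <<<"$info")" = online ]
  [ "$(jq -r '.raid_level' <<<"$info")" = raid1 ]
  [ "$(jq -r '.strip_size_kb' <<<"$info")" -eq 0 ]
  [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 3 ]
  [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" -eq 3 ]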
00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.009 19:32:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:09.009 [2024-04-18 19:32:24.850976] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:39:09.009 [2024-04-18 19:32:24.851011] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:09.009 [2024-04-18 19:32:24.851158] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5970 00:39:09.009 [2024-04-18 19:32:24.851607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:39:09.009 [2024-04-18 19:32:24.851627] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:39:09.009 [2024-04-18 19:32:24.851787] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:09.266 19:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:09.266 "name": "raid_bdev1", 00:39:09.267 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:09.267 "strip_size_kb": 0, 00:39:09.267 "state": "online", 00:39:09.267 "raid_level": "raid1", 00:39:09.267 "superblock": true, 00:39:09.267 "num_base_bdevs": 4, 00:39:09.267 "num_base_bdevs_discovered": 3, 00:39:09.267 "num_base_bdevs_operational": 3, 00:39:09.267 "base_bdevs_list": [ 00:39:09.267 { 00:39:09.267 "name": "spare", 00:39:09.267 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:09.267 "is_configured": true, 00:39:09.267 "data_offset": 2048, 00:39:09.267 "data_size": 63488 00:39:09.267 }, 00:39:09.267 { 00:39:09.267 "name": null, 00:39:09.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:09.267 "is_configured": false, 00:39:09.267 "data_offset": 2048, 00:39:09.267 "data_size": 63488 00:39:09.267 }, 00:39:09.267 { 00:39:09.267 "name": "BaseBdev3", 00:39:09.267 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:09.267 "is_configured": true, 00:39:09.267 "data_offset": 2048, 00:39:09.267 "data_size": 63488 00:39:09.267 }, 00:39:09.267 { 00:39:09.267 "name": "BaseBdev4", 00:39:09.267 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:09.267 "is_configured": true, 00:39:09.267 "data_offset": 2048, 00:39:09.267 "data_size": 63488 00:39:09.267 } 00:39:09.267 ] 00:39:09.267 }' 00:39:09.267 19:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:09.267 19:32:25 -- common/autotest_common.sh@10 -- # set +x 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:09.834 19:32:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:10.401 "name": "raid_bdev1", 00:39:10.401 "uuid": "12a4248b-dbfc-4848-8296-bb63abdbe38c", 00:39:10.401 "strip_size_kb": 0, 00:39:10.401 "state": "online", 00:39:10.401 "raid_level": "raid1", 00:39:10.401 "superblock": true, 00:39:10.401 "num_base_bdevs": 4, 00:39:10.401 "num_base_bdevs_discovered": 3, 00:39:10.401 "num_base_bdevs_operational": 3, 00:39:10.401 "base_bdevs_list": [ 00:39:10.401 { 00:39:10.401 "name": "spare", 00:39:10.401 "uuid": "0e391ec4-3675-5c09-b154-a366235e61e7", 00:39:10.401 "is_configured": true, 00:39:10.401 "data_offset": 2048, 00:39:10.401 "data_size": 63488 00:39:10.401 }, 00:39:10.401 { 00:39:10.401 "name": null, 00:39:10.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:10.401 "is_configured": false, 00:39:10.401 "data_offset": 2048, 00:39:10.401 "data_size": 63488 00:39:10.401 }, 00:39:10.401 { 00:39:10.401 "name": "BaseBdev3", 00:39:10.401 "uuid": "a3f1c201-d659-56e3-9d3a-471c6620ff72", 00:39:10.401 "is_configured": true, 00:39:10.401 "data_offset": 2048, 00:39:10.401 "data_size": 63488 00:39:10.401 }, 00:39:10.401 { 00:39:10.401 "name": "BaseBdev4", 00:39:10.401 "uuid": "825caf6d-23b2-5f66-8743-acefac26eeac", 00:39:10.401 "is_configured": true, 00:39:10.401 "data_offset": 2048, 00:39:10.401 "data_size": 63488 00:39:10.401 } 00:39:10.401 ] 00:39:10.401 }' 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:10.401 19:32:26 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:10.660 19:32:26 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:39:10.660 19:32:26 -- bdev/bdev_raid.sh@709 -- # killprocess 135603 00:39:10.660 19:32:26 -- common/autotest_common.sh@936 -- # '[' -z 135603 ']' 00:39:10.660 19:32:26 -- common/autotest_common.sh@940 -- # kill -0 135603 00:39:10.660 19:32:26 -- common/autotest_common.sh@941 -- # uname 00:39:10.660 19:32:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:39:10.660 19:32:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135603 00:39:10.660 killing process with pid 135603 00:39:10.660 Received shutdown signal, test time was about 60.000000 seconds 00:39:10.660 00:39:10.660 Latency(us) 00:39:10.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.660 =================================================================================================================== 00:39:10.660 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:10.660 19:32:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:39:10.660 19:32:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:39:10.660 19:32:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135603' 00:39:10.660 19:32:26 -- common/autotest_common.sh@955 -- # kill 135603 00:39:10.660 19:32:26 -- common/autotest_common.sh@960 -- # wait 135603 00:39:10.660 [2024-04-18 19:32:26.452249] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:10.660 [2024-04-18 19:32:26.452336] bdev_raid.c: 449:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:39:10.660 [2024-04-18 19:32:26.452414] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:10.660 [2024-04-18 19:32:26.452424] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:39:11.226 [2024-04-18 19:32:27.030047] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:13.126 ************************************ 00:39:13.126 END TEST raid_rebuild_test_sb 00:39:13.126 ************************************ 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@711 -- # return 0 00:39:13.126 00:39:13.126 real 0m31.447s 00:39:13.126 user 0m45.596s 00:39:13.126 sys 0m5.300s 00:39:13.126 19:32:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:13.126 19:32:28 -- common/autotest_common.sh@10 -- # set +x 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:39:13.126 19:32:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:39:13.126 19:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:13.126 19:32:28 -- common/autotest_common.sh@10 -- # set +x 00:39:13.126 ************************************ 00:39:13.126 START TEST raid_rebuild_test_io 00:39:13.126 ************************************ 00:39:13.126 19:32:28 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false true 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:39:13.126 19:32:28 -- 
bdev/bdev_raid.sh@544 -- # raid_pid=136357 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136357 /var/tmp/spdk-raid.sock 00:39:13.126 19:32:28 -- common/autotest_common.sh@817 -- # '[' -z 136357 ']' 00:39:13.126 19:32:28 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:13.126 19:32:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:13.126 19:32:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:39:13.126 19:32:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:13.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:13.126 19:32:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:39:13.126 19:32:28 -- common/autotest_common.sh@10 -- # set +x 00:39:13.126 [2024-04-18 19:32:28.717544] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:39:13.126 [2024-04-18 19:32:28.717727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136357 ] 00:39:13.126 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:13.126 Zero copy mechanism will not be used. 00:39:13.126 [2024-04-18 19:32:28.880589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.385 [2024-04-18 19:32:29.117817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.644 [2024-04-18 19:32:29.430843] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:13.902 19:32:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:39:13.902 19:32:29 -- common/autotest_common.sh@850 -- # return 0 00:39:13.902 19:32:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:13.902 19:32:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:39:13.902 19:32:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:39:14.161 BaseBdev1 00:39:14.161 19:32:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:14.161 19:32:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:39:14.161 19:32:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:39:14.727 BaseBdev2 00:39:14.727 19:32:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:14.727 19:32:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:39:14.727 19:32:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:39:14.727 BaseBdev3 00:39:14.984 19:32:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:14.984 19:32:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:39:14.984 19:32:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:39:14.984 BaseBdev4 00:39:15.242 19:32:30 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:39:15.242 
spare_malloc 00:39:15.501 19:32:31 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:15.501 spare_delay 00:39:15.501 19:32:31 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:15.759 [2024-04-18 19:32:31.657274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:15.759 [2024-04-18 19:32:31.657389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:15.759 [2024-04-18 19:32:31.657425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:15.759 [2024-04-18 19:32:31.657472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:15.759 [2024-04-18 19:32:31.660064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:15.759 [2024-04-18 19:32:31.660120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:15.759 spare 00:39:15.759 19:32:31 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:39:16.334 [2024-04-18 19:32:31.945379] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:16.334 [2024-04-18 19:32:31.947671] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:16.334 [2024-04-18 19:32:31.947739] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:16.334 [2024-04-18 19:32:31.947774] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:16.334 [2024-04-18 19:32:31.947842] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:39:16.334 [2024-04-18 19:32:31.947852] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:39:16.334 [2024-04-18 19:32:31.948018] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:39:16.334 [2024-04-18 19:32:31.948347] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:39:16.334 [2024-04-18 19:32:31.948359] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:39:16.334 [2024-04-18 19:32:31.948543] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:16.334 19:32:31 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:16.334 19:32:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:16.334 19:32:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:16.334 19:32:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:16.335 19:32:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:16.594 19:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:16.594 "name": "raid_bdev1", 00:39:16.594 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:16.594 "strip_size_kb": 0, 00:39:16.594 "state": "online", 00:39:16.594 "raid_level": "raid1", 00:39:16.594 "superblock": false, 00:39:16.594 "num_base_bdevs": 4, 00:39:16.594 "num_base_bdevs_discovered": 4, 00:39:16.594 "num_base_bdevs_operational": 4, 00:39:16.594 "base_bdevs_list": [ 00:39:16.594 { 00:39:16.594 "name": "BaseBdev1", 00:39:16.594 "uuid": "a2507b8f-d2af-43c1-9b8c-b1808fa9951b", 00:39:16.594 "is_configured": true, 00:39:16.594 "data_offset": 0, 00:39:16.594 "data_size": 65536 00:39:16.594 }, 00:39:16.594 { 00:39:16.594 "name": "BaseBdev2", 00:39:16.594 "uuid": "6c9ecabe-719a-4ba3-932e-93f2e1bd2f67", 00:39:16.594 "is_configured": true, 00:39:16.594 "data_offset": 0, 00:39:16.594 "data_size": 65536 00:39:16.594 }, 00:39:16.594 { 00:39:16.594 "name": "BaseBdev3", 00:39:16.594 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:16.594 "is_configured": true, 00:39:16.594 "data_offset": 0, 00:39:16.594 "data_size": 65536 00:39:16.594 }, 00:39:16.594 { 00:39:16.594 "name": "BaseBdev4", 00:39:16.594 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:16.594 "is_configured": true, 00:39:16.594 "data_offset": 0, 00:39:16.594 "data_size": 65536 00:39:16.594 } 00:39:16.594 ] 00:39:16.594 }' 00:39:16.594 19:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:16.594 19:32:32 -- common/autotest_common.sh@10 -- # set +x 00:39:17.163 19:32:32 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:17.163 19:32:32 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:39:17.422 [2024-04-18 19:32:33.217921] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:17.422 19:32:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:39:17.422 19:32:33 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.422 19:32:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:17.680 19:32:33 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:39:17.680 19:32:33 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:39:17.680 19:32:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:39:17.680 19:32:33 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:39:17.939 [2024-04-18 19:32:33.614987] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:39:17.939 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:17.939 Zero copy mechanism will not be used. 00:39:17.939 Running I/O for 60 seconds... 
00:39:17.939 [2024-04-18 19:32:33.790701] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:17.939 [2024-04-18 19:32:33.797140] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.939 19:32:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.201 19:32:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:18.201 "name": "raid_bdev1", 00:39:18.201 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:18.201 "strip_size_kb": 0, 00:39:18.201 "state": "online", 00:39:18.201 "raid_level": "raid1", 00:39:18.201 "superblock": false, 00:39:18.201 "num_base_bdevs": 4, 00:39:18.201 "num_base_bdevs_discovered": 3, 00:39:18.201 "num_base_bdevs_operational": 3, 00:39:18.201 "base_bdevs_list": [ 00:39:18.201 { 00:39:18.201 "name": null, 00:39:18.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.201 "is_configured": false, 00:39:18.201 "data_offset": 0, 00:39:18.201 "data_size": 65536 00:39:18.201 }, 00:39:18.201 { 00:39:18.201 "name": "BaseBdev2", 00:39:18.201 "uuid": "6c9ecabe-719a-4ba3-932e-93f2e1bd2f67", 00:39:18.201 "is_configured": true, 00:39:18.201 "data_offset": 0, 00:39:18.201 "data_size": 65536 00:39:18.201 }, 00:39:18.201 { 00:39:18.201 "name": "BaseBdev3", 00:39:18.201 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:18.201 "is_configured": true, 00:39:18.201 "data_offset": 0, 00:39:18.201 "data_size": 65536 00:39:18.201 }, 00:39:18.201 { 00:39:18.201 "name": "BaseBdev4", 00:39:18.201 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:18.201 "is_configured": true, 00:39:18.201 "data_offset": 0, 00:39:18.201 "data_size": 65536 00:39:18.201 } 00:39:18.201 ] 00:39:18.201 }' 00:39:18.201 19:32:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:18.201 19:32:34 -- common/autotest_common.sh@10 -- # set +x 00:39:19.137 19:32:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:19.394 [2024-04-18 19:32:35.077073] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:39:19.394 [2024-04-18 19:32:35.077137] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:19.394 19:32:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:39:19.394 [2024-04-18 19:32:35.148668] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:19.394 [2024-04-18 19:32:35.150840] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:19.394 [2024-04-18 
19:32:35.269501] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:19.394 [2024-04-18 19:32:35.270811] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:19.652 [2024-04-18 19:32:35.509516] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:19.652 [2024-04-18 19:32:35.509826] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:19.911 [2024-04-18 19:32:35.754246] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:20.175 [2024-04-18 19:32:35.984118] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:20.434 19:32:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.695 19:32:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:20.695 "name": "raid_bdev1", 00:39:20.695 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:20.695 "strip_size_kb": 0, 00:39:20.695 "state": "online", 00:39:20.695 "raid_level": "raid1", 00:39:20.695 "superblock": false, 00:39:20.695 "num_base_bdevs": 4, 00:39:20.695 "num_base_bdevs_discovered": 4, 00:39:20.695 "num_base_bdevs_operational": 4, 00:39:20.695 "process": { 00:39:20.695 "type": "rebuild", 00:39:20.695 "target": "spare", 00:39:20.695 "progress": { 00:39:20.695 "blocks": 14336, 00:39:20.695 "percent": 21 00:39:20.695 } 00:39:20.695 }, 00:39:20.695 "base_bdevs_list": [ 00:39:20.695 { 00:39:20.695 "name": "spare", 00:39:20.695 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:20.695 "is_configured": true, 00:39:20.695 "data_offset": 0, 00:39:20.695 "data_size": 65536 00:39:20.695 }, 00:39:20.695 { 00:39:20.695 "name": "BaseBdev2", 00:39:20.695 "uuid": "6c9ecabe-719a-4ba3-932e-93f2e1bd2f67", 00:39:20.695 "is_configured": true, 00:39:20.695 "data_offset": 0, 00:39:20.695 "data_size": 65536 00:39:20.695 }, 00:39:20.695 { 00:39:20.695 "name": "BaseBdev3", 00:39:20.695 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:20.695 "is_configured": true, 00:39:20.695 "data_offset": 0, 00:39:20.695 "data_size": 65536 00:39:20.695 }, 00:39:20.695 { 00:39:20.695 "name": "BaseBdev4", 00:39:20.695 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:20.695 "is_configured": true, 00:39:20.695 "data_offset": 0, 00:39:20.695 "data_size": 65536 00:39:20.695 } 00:39:20.695 ] 00:39:20.695 }' 00:39:20.695 19:32:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:20.695 19:32:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:20.695 19:32:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:20.695 19:32:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:20.695 19:32:36 -- bdev/bdev_raid.sh@604 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:20.953 [2024-04-18 19:32:36.669614] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:20.953 [2024-04-18 19:32:36.818769] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:21.212 [2024-04-18 19:32:36.909929] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:39:21.212 [2024-04-18 19:32:36.962827] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:21.212 [2024-04-18 19:32:36.974929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:21.212 [2024-04-18 19:32:37.017591] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:21.212 19:32:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:21.470 19:32:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:21.470 "name": "raid_bdev1", 00:39:21.470 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:21.470 "strip_size_kb": 0, 00:39:21.470 "state": "online", 00:39:21.470 "raid_level": "raid1", 00:39:21.470 "superblock": false, 00:39:21.470 "num_base_bdevs": 4, 00:39:21.470 "num_base_bdevs_discovered": 3, 00:39:21.470 "num_base_bdevs_operational": 3, 00:39:21.470 "base_bdevs_list": [ 00:39:21.470 { 00:39:21.470 "name": null, 00:39:21.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.470 "is_configured": false, 00:39:21.471 "data_offset": 0, 00:39:21.471 "data_size": 65536 00:39:21.471 }, 00:39:21.471 { 00:39:21.471 "name": "BaseBdev2", 00:39:21.471 "uuid": "6c9ecabe-719a-4ba3-932e-93f2e1bd2f67", 00:39:21.471 "is_configured": true, 00:39:21.471 "data_offset": 0, 00:39:21.471 "data_size": 65536 00:39:21.471 }, 00:39:21.471 { 00:39:21.471 "name": "BaseBdev3", 00:39:21.471 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:21.471 "is_configured": true, 00:39:21.471 "data_offset": 0, 00:39:21.471 "data_size": 65536 00:39:21.471 }, 00:39:21.471 { 00:39:21.471 "name": "BaseBdev4", 00:39:21.471 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:21.471 "is_configured": true, 00:39:21.471 "data_offset": 0, 00:39:21.471 "data_size": 65536 00:39:21.471 } 00:39:21.471 ] 00:39:21.471 }' 00:39:21.471 19:32:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:21.471 19:32:37 -- common/autotest_common.sh@10 -- # set +x 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@610 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:22.400 19:32:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.658 19:32:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:22.658 "name": "raid_bdev1", 00:39:22.658 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:22.658 "strip_size_kb": 0, 00:39:22.658 "state": "online", 00:39:22.658 "raid_level": "raid1", 00:39:22.658 "superblock": false, 00:39:22.658 "num_base_bdevs": 4, 00:39:22.658 "num_base_bdevs_discovered": 3, 00:39:22.658 "num_base_bdevs_operational": 3, 00:39:22.658 "base_bdevs_list": [ 00:39:22.658 { 00:39:22.658 "name": null, 00:39:22.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.658 "is_configured": false, 00:39:22.658 "data_offset": 0, 00:39:22.658 "data_size": 65536 00:39:22.658 }, 00:39:22.658 { 00:39:22.658 "name": "BaseBdev2", 00:39:22.658 "uuid": "6c9ecabe-719a-4ba3-932e-93f2e1bd2f67", 00:39:22.658 "is_configured": true, 00:39:22.658 "data_offset": 0, 00:39:22.658 "data_size": 65536 00:39:22.658 }, 00:39:22.658 { 00:39:22.658 "name": "BaseBdev3", 00:39:22.658 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:22.658 "is_configured": true, 00:39:22.658 "data_offset": 0, 00:39:22.658 "data_size": 65536 00:39:22.658 }, 00:39:22.658 { 00:39:22.658 "name": "BaseBdev4", 00:39:22.658 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:22.658 "is_configured": true, 00:39:22.658 "data_offset": 0, 00:39:22.658 "data_size": 65536 00:39:22.658 } 00:39:22.658 ] 00:39:22.658 }' 00:39:22.658 19:32:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:22.658 19:32:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:22.658 19:32:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:22.658 19:32:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:22.658 19:32:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:23.236 [2024-04-18 19:32:38.844128] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:39:23.236 [2024-04-18 19:32:38.844203] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:23.236 [2024-04-18 19:32:38.901220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:23.236 19:32:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:39:23.236 [2024-04-18 19:32:38.903470] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:23.236 [2024-04-18 19:32:39.031162] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:23.516 [2024-04-18 19:32:39.269571] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:23.516 [2024-04-18 19:32:39.270264] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:24.082 [2024-04-18 19:32:39.757946] 
bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:24.082 19:32:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.082 [2024-04-18 19:32:39.996944] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:24.082 [2024-04-18 19:32:39.998235] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:24.340 19:32:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:24.340 "name": "raid_bdev1", 00:39:24.341 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:24.341 "strip_size_kb": 0, 00:39:24.341 "state": "online", 00:39:24.341 "raid_level": "raid1", 00:39:24.341 "superblock": false, 00:39:24.341 "num_base_bdevs": 4, 00:39:24.341 "num_base_bdevs_discovered": 4, 00:39:24.341 "num_base_bdevs_operational": 4, 00:39:24.341 "process": { 00:39:24.341 "type": "rebuild", 00:39:24.341 "target": "spare", 00:39:24.341 "progress": { 00:39:24.341 "blocks": 14336, 00:39:24.341 "percent": 21 00:39:24.341 } 00:39:24.341 }, 00:39:24.341 "base_bdevs_list": [ 00:39:24.341 { 00:39:24.341 "name": "spare", 00:39:24.341 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:24.341 "is_configured": true, 00:39:24.341 "data_offset": 0, 00:39:24.341 "data_size": 65536 00:39:24.341 }, 00:39:24.341 { 00:39:24.341 "name": "BaseBdev2", 00:39:24.341 "uuid": "6c9ecabe-719a-4ba3-932e-93f2e1bd2f67", 00:39:24.341 "is_configured": true, 00:39:24.341 "data_offset": 0, 00:39:24.341 "data_size": 65536 00:39:24.341 }, 00:39:24.341 { 00:39:24.341 "name": "BaseBdev3", 00:39:24.341 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:24.341 "is_configured": true, 00:39:24.341 "data_offset": 0, 00:39:24.341 "data_size": 65536 00:39:24.341 }, 00:39:24.341 { 00:39:24.341 "name": "BaseBdev4", 00:39:24.341 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:24.341 "is_configured": true, 00:39:24.341 "data_offset": 0, 00:39:24.341 "data_size": 65536 00:39:24.341 } 00:39:24.341 ] 00:39:24.341 }' 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:24.341 [2024-04-18 19:32:40.208967] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:24.341 [2024-04-18 19:32:40.209696] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:39:24.341 19:32:40 -- 
bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:39:24.341 19:32:40 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:39:24.600 [2024-04-18 19:32:40.461295] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:24.858 [2024-04-18 19:32:40.543636] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:24.858 [2024-04-18 19:32:40.652758] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ad0 00:39:24.858 [2024-04-18 19:32:40.652812] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:24.858 19:32:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.858 [2024-04-18 19:32:40.782565] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:25.116 "name": "raid_bdev1", 00:39:25.116 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:25.116 "strip_size_kb": 0, 00:39:25.116 "state": "online", 00:39:25.116 "raid_level": "raid1", 00:39:25.116 "superblock": false, 00:39:25.116 "num_base_bdevs": 4, 00:39:25.116 "num_base_bdevs_discovered": 3, 00:39:25.116 "num_base_bdevs_operational": 3, 00:39:25.116 "process": { 00:39:25.116 "type": "rebuild", 00:39:25.116 "target": "spare", 00:39:25.116 "progress": { 00:39:25.116 "blocks": 22528, 00:39:25.116 "percent": 34 00:39:25.116 } 00:39:25.116 }, 00:39:25.116 "base_bdevs_list": [ 00:39:25.116 { 00:39:25.116 "name": "spare", 00:39:25.116 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:25.116 "is_configured": true, 00:39:25.116 "data_offset": 0, 00:39:25.116 "data_size": 65536 00:39:25.116 }, 00:39:25.116 { 00:39:25.116 "name": null, 00:39:25.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.116 "is_configured": false, 00:39:25.116 "data_offset": 0, 00:39:25.116 "data_size": 65536 00:39:25.116 }, 00:39:25.116 { 00:39:25.116 "name": "BaseBdev3", 00:39:25.116 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:25.116 "is_configured": true, 00:39:25.116 "data_offset": 0, 00:39:25.116 "data_size": 65536 00:39:25.116 }, 00:39:25.116 { 00:39:25.116 "name": "BaseBdev4", 00:39:25.116 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:25.116 "is_configured": true, 00:39:25.116 "data_offset": 0, 00:39:25.116 "data_size": 65536 00:39:25.116 } 00:39:25.116 ] 00:39:25.116 }' 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@657 -- # local timeout=605 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:25.116 19:32:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:25.375 [2024-04-18 19:32:41.147256] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:39:25.375 19:32:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:25.375 "name": "raid_bdev1", 00:39:25.375 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:25.375 "strip_size_kb": 0, 00:39:25.375 "state": "online", 00:39:25.375 "raid_level": "raid1", 00:39:25.375 "superblock": false, 00:39:25.375 "num_base_bdevs": 4, 00:39:25.375 "num_base_bdevs_discovered": 3, 00:39:25.375 "num_base_bdevs_operational": 3, 00:39:25.375 "process": { 00:39:25.375 "type": "rebuild", 00:39:25.375 "target": "spare", 00:39:25.375 "progress": { 00:39:25.375 "blocks": 26624, 00:39:25.375 "percent": 40 00:39:25.375 } 00:39:25.375 }, 00:39:25.375 "base_bdevs_list": [ 00:39:25.375 { 00:39:25.375 "name": "spare", 00:39:25.375 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:25.375 "is_configured": true, 00:39:25.375 "data_offset": 0, 00:39:25.375 "data_size": 65536 00:39:25.375 }, 00:39:25.375 { 00:39:25.375 "name": null, 00:39:25.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.375 "is_configured": false, 00:39:25.375 "data_offset": 0, 00:39:25.375 "data_size": 65536 00:39:25.375 }, 00:39:25.375 { 00:39:25.375 "name": "BaseBdev3", 00:39:25.375 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:25.375 "is_configured": true, 00:39:25.375 "data_offset": 0, 00:39:25.375 "data_size": 65536 00:39:25.375 }, 00:39:25.375 { 00:39:25.375 "name": "BaseBdev4", 00:39:25.375 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:25.375 "is_configured": true, 00:39:25.375 "data_offset": 0, 00:39:25.375 "data_size": 65536 00:39:25.375 } 00:39:25.375 ] 00:39:25.375 }' 00:39:25.375 19:32:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:25.645 19:32:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:25.645 19:32:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:25.645 [2024-04-18 19:32:41.366087] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:39:25.645 19:32:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:25.645 19:32:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:25.909 [2024-04-18 19:32:41.598087] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:39:25.909 [2024-04-18 19:32:41.817237] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
34816 offset_begin: 30720 offset_end: 36864 00:39:26.475 [2024-04-18 19:32:42.164682] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:39:26.475 [2024-04-18 19:32:42.165222] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.475 19:32:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.734 [2024-04-18 19:32:42.630991] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:39:26.734 [2024-04-18 19:32:42.631659] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:39:26.992 19:32:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:26.992 "name": "raid_bdev1", 00:39:26.992 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:26.992 "strip_size_kb": 0, 00:39:26.992 "state": "online", 00:39:26.992 "raid_level": "raid1", 00:39:26.992 "superblock": false, 00:39:26.992 "num_base_bdevs": 4, 00:39:26.992 "num_base_bdevs_discovered": 3, 00:39:26.992 "num_base_bdevs_operational": 3, 00:39:26.992 "process": { 00:39:26.992 "type": "rebuild", 00:39:26.992 "target": "spare", 00:39:26.992 "progress": { 00:39:26.992 "blocks": 47104, 00:39:26.992 "percent": 71 00:39:26.992 } 00:39:26.992 }, 00:39:26.992 "base_bdevs_list": [ 00:39:26.992 { 00:39:26.992 "name": "spare", 00:39:26.992 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:26.992 "is_configured": true, 00:39:26.992 "data_offset": 0, 00:39:26.992 "data_size": 65536 00:39:26.992 }, 00:39:26.992 { 00:39:26.992 "name": null, 00:39:26.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.992 "is_configured": false, 00:39:26.992 "data_offset": 0, 00:39:26.992 "data_size": 65536 00:39:26.992 }, 00:39:26.992 { 00:39:26.992 "name": "BaseBdev3", 00:39:26.992 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:26.992 "is_configured": true, 00:39:26.992 "data_offset": 0, 00:39:26.992 "data_size": 65536 00:39:26.992 }, 00:39:26.992 { 00:39:26.992 "name": "BaseBdev4", 00:39:26.992 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:26.992 "is_configured": true, 00:39:26.992 "data_offset": 0, 00:39:26.992 "data_size": 65536 00:39:26.992 } 00:39:26.992 ] 00:39:26.992 }' 00:39:26.992 19:32:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:26.992 19:32:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:26.992 19:32:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:26.992 19:32:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:26.992 19:32:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:27.250 [2024-04-18 19:32:42.955084] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 
offset_end: 55296 00:39:27.509 [2024-04-18 19:32:43.184157] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:39:27.509 [2024-04-18 19:32:43.184697] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:28.076 19:32:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.076 [2024-04-18 19:32:43.972132] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:28.333 [2024-04-18 19:32:44.068889] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:28.333 [2024-04-18 19:32:44.071270] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:28.333 19:32:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:28.333 "name": "raid_bdev1", 00:39:28.333 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:28.333 "strip_size_kb": 0, 00:39:28.333 "state": "online", 00:39:28.333 "raid_level": "raid1", 00:39:28.334 "superblock": false, 00:39:28.334 "num_base_bdevs": 4, 00:39:28.334 "num_base_bdevs_discovered": 3, 00:39:28.334 "num_base_bdevs_operational": 3, 00:39:28.334 "base_bdevs_list": [ 00:39:28.334 { 00:39:28.334 "name": "spare", 00:39:28.334 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:28.334 "is_configured": true, 00:39:28.334 "data_offset": 0, 00:39:28.334 "data_size": 65536 00:39:28.334 }, 00:39:28.334 { 00:39:28.334 "name": null, 00:39:28.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.334 "is_configured": false, 00:39:28.334 "data_offset": 0, 00:39:28.334 "data_size": 65536 00:39:28.334 }, 00:39:28.334 { 00:39:28.334 "name": "BaseBdev3", 00:39:28.334 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:28.334 "is_configured": true, 00:39:28.334 "data_offset": 0, 00:39:28.334 "data_size": 65536 00:39:28.334 }, 00:39:28.334 { 00:39:28.334 "name": "BaseBdev4", 00:39:28.334 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:28.334 "is_configured": true, 00:39:28.334 "data_offset": 0, 00:39:28.334 "data_size": 65536 00:39:28.334 } 00:39:28.334 ] 00:39:28.334 }' 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@660 -- # break 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@185 -- # local target=none 
00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:28.334 19:32:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.593 19:32:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:28.593 "name": "raid_bdev1", 00:39:28.593 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:28.593 "strip_size_kb": 0, 00:39:28.593 "state": "online", 00:39:28.593 "raid_level": "raid1", 00:39:28.593 "superblock": false, 00:39:28.593 "num_base_bdevs": 4, 00:39:28.593 "num_base_bdevs_discovered": 3, 00:39:28.593 "num_base_bdevs_operational": 3, 00:39:28.593 "base_bdevs_list": [ 00:39:28.593 { 00:39:28.593 "name": "spare", 00:39:28.593 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:28.593 "is_configured": true, 00:39:28.593 "data_offset": 0, 00:39:28.593 "data_size": 65536 00:39:28.593 }, 00:39:28.593 { 00:39:28.593 "name": null, 00:39:28.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.593 "is_configured": false, 00:39:28.593 "data_offset": 0, 00:39:28.593 "data_size": 65536 00:39:28.593 }, 00:39:28.593 { 00:39:28.593 "name": "BaseBdev3", 00:39:28.593 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:28.593 "is_configured": true, 00:39:28.593 "data_offset": 0, 00:39:28.593 "data_size": 65536 00:39:28.593 }, 00:39:28.593 { 00:39:28.593 "name": "BaseBdev4", 00:39:28.593 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:28.593 "is_configured": true, 00:39:28.593 "data_offset": 0, 00:39:28.593 "data_size": 65536 00:39:28.593 } 00:39:28.593 ] 00:39:28.593 }' 00:39:28.593 19:32:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:28.593 19:32:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:28.593 19:32:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.852 19:32:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:28.852 "name": "raid_bdev1", 00:39:28.852 "uuid": "2dbdb5f5-6eee-442f-9647-06039403f3ab", 00:39:28.852 "strip_size_kb": 0, 00:39:28.852 "state": "online", 00:39:28.852 "raid_level": "raid1", 00:39:28.852 "superblock": false, 00:39:28.852 "num_base_bdevs": 4, 00:39:28.852 "num_base_bdevs_discovered": 3, 00:39:28.852 "num_base_bdevs_operational": 3, 00:39:28.852 "base_bdevs_list": [ 00:39:28.852 { 00:39:28.852 "name": 
"spare", 00:39:28.852 "uuid": "095fb62f-d560-5cb1-8851-65fa2c4de262", 00:39:28.853 "is_configured": true, 00:39:28.853 "data_offset": 0, 00:39:28.853 "data_size": 65536 00:39:28.853 }, 00:39:28.853 { 00:39:28.853 "name": null, 00:39:28.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.853 "is_configured": false, 00:39:28.853 "data_offset": 0, 00:39:28.853 "data_size": 65536 00:39:28.853 }, 00:39:28.853 { 00:39:28.853 "name": "BaseBdev3", 00:39:28.853 "uuid": "9eb59ead-ab60-43b4-89f4-a79a046e7535", 00:39:28.853 "is_configured": true, 00:39:28.853 "data_offset": 0, 00:39:28.853 "data_size": 65536 00:39:28.853 }, 00:39:28.853 { 00:39:28.853 "name": "BaseBdev4", 00:39:28.853 "uuid": "19d71118-80ed-4aca-a00a-0da1ce16ab39", 00:39:28.853 "is_configured": true, 00:39:28.853 "data_offset": 0, 00:39:28.853 "data_size": 65536 00:39:28.853 } 00:39:28.853 ] 00:39:28.853 }' 00:39:28.853 19:32:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:28.853 19:32:44 -- common/autotest_common.sh@10 -- # set +x 00:39:29.787 19:32:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:29.787 [2024-04-18 19:32:45.620076] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:29.787 [2024-04-18 19:32:45.620123] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:30.045 00:39:30.045 Latency(us) 00:39:30.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:30.045 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:39:30.045 raid_bdev1 : 12.10 107.28 321.85 0.00 0.00 13076.18 456.41 122833.19 00:39:30.045 =================================================================================================================== 00:39:30.045 Total : 107.28 321.85 0.00 0.00 13076.18 456.41 122833.19 00:39:30.045 [2024-04-18 19:32:45.743044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:30.045 [2024-04-18 19:32:45.743100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:30.045 [2024-04-18 19:32:45.743190] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:30.045 [2024-04-18 19:32:45.743201] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:39:30.045 0 00:39:30.045 19:32:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:39:30.046 19:32:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:30.304 19:32:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:39:30.304 19:32:45 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:39:30.304 19:32:45 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@12 -- # local i 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:30.304 19:32:45 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:39:30.563 /dev/nbd0 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:30.563 19:32:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:39:30.563 19:32:46 -- common/autotest_common.sh@855 -- # local i 00:39:30.563 19:32:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:30.563 19:32:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:30.563 19:32:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:39:30.563 19:32:46 -- common/autotest_common.sh@859 -- # break 00:39:30.563 19:32:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:30.563 19:32:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:30.563 19:32:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:30.563 1+0 records in 00:39:30.563 1+0 records out 00:39:30.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317443 s, 12.9 MB/s 00:39:30.563 19:32:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:30.563 19:32:46 -- common/autotest_common.sh@872 -- # size=4096 00:39:30.563 19:32:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:30.563 19:32:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:30.563 19:32:46 -- common/autotest_common.sh@875 -- # return 0 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:30.563 19:32:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:39:30.563 19:32:46 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:39:30.563 19:32:46 -- bdev/bdev_raid.sh@678 -- # continue 00:39:30.563 19:32:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:39:30.563 19:32:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:39:30.563 19:32:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@12 -- # local i 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:30.563 19:32:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:39:30.821 /dev/nbd1 00:39:30.821 19:32:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:30.821 19:32:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:30.821 19:32:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:30.821 19:32:46 -- common/autotest_common.sh@855 -- # local i 00:39:30.821 19:32:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:30.821 19:32:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:30.821 19:32:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:30.821 19:32:46 -- common/autotest_common.sh@859 -- # break 00:39:30.821 19:32:46 -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:30.821 19:32:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:30.821 19:32:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:30.821 1+0 records in 00:39:30.821 1+0 records out 00:39:30.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526592 s, 7.8 MB/s 00:39:30.821 19:32:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:30.821 19:32:46 -- common/autotest_common.sh@872 -- # size=4096 00:39:30.821 19:32:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:30.821 19:32:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:30.821 19:32:46 -- common/autotest_common.sh@875 -- # return 0 00:39:30.821 19:32:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:30.821 19:32:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:30.821 19:32:46 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:31.080 19:32:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:39:31.080 19:32:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:31.080 19:32:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:31.080 19:32:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:31.080 19:32:46 -- bdev/nbd_common.sh@51 -- # local i 00:39:31.080 19:32:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:31.080 19:32:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@41 -- # break 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@45 -- # return 0 00:39:31.339 19:32:47 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:39:31.339 19:32:47 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:39:31.339 19:32:47 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@12 -- # local i 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:31.339 19:32:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:39:31.597 /dev/nbd1 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:31.597 19:32:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:31.597 19:32:47 -- common/autotest_common.sh@855 -- # local i 00:39:31.597 19:32:47 -- 
common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:31.597 19:32:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:31.597 19:32:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:31.597 19:32:47 -- common/autotest_common.sh@859 -- # break 00:39:31.597 19:32:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:31.597 19:32:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:31.597 19:32:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:31.597 1+0 records in 00:39:31.597 1+0 records out 00:39:31.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380791 s, 10.8 MB/s 00:39:31.597 19:32:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:31.597 19:32:47 -- common/autotest_common.sh@872 -- # size=4096 00:39:31.597 19:32:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:31.597 19:32:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:31.597 19:32:47 -- common/autotest_common.sh@875 -- # return 0 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:31.597 19:32:47 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:31.597 19:32:47 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@51 -- # local i 00:39:31.597 19:32:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:31.598 19:32:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:31.883 19:32:47 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@41 -- # break 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@45 -- # return 0 00:39:32.141 19:32:47 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@51 -- # local i 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:32.141 19:32:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@41 -- # break 00:39:32.399 19:32:48 -- bdev/nbd_common.sh@45 -- # return 0 00:39:32.399 19:32:48 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:39:32.399 19:32:48 -- bdev/bdev_raid.sh@709 -- # killprocess 136357 00:39:32.399 19:32:48 -- common/autotest_common.sh@936 -- # '[' -z 136357 ']' 00:39:32.399 19:32:48 -- common/autotest_common.sh@940 -- # kill -0 136357 00:39:32.399 19:32:48 -- common/autotest_common.sh@941 -- # uname 00:39:32.399 19:32:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:39:32.399 19:32:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136357 00:39:32.399 19:32:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:39:32.399 19:32:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:39:32.399 19:32:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136357' 00:39:32.399 killing process with pid 136357 00:39:32.399 19:32:48 -- common/autotest_common.sh@955 -- # kill 136357 00:39:32.399 Received shutdown signal, test time was about 14.636668 seconds 00:39:32.399 00:39:32.399 Latency(us) 00:39:32.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:32.400 =================================================================================================================== 00:39:32.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:32.400 19:32:48 -- common/autotest_common.sh@960 -- # wait 136357 00:39:32.400 [2024-04-18 19:32:48.253984] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:32.966 [2024-04-18 19:32:48.756612] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:34.877 ************************************ 00:39:34.877 END TEST raid_rebuild_test_io 00:39:34.877 ************************************ 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@711 -- # return 0 00:39:34.877 00:39:34.877 real 0m21.695s 00:39:34.877 user 0m33.259s 00:39:34.877 sys 0m2.841s 00:39:34.877 19:32:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:34.877 19:32:50 -- common/autotest_common.sh@10 -- # set +x 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:39:34.877 19:32:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:39:34.877 19:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:34.877 19:32:50 -- common/autotest_common.sh@10 -- # set +x 00:39:34.877 ************************************ 00:39:34.877 START TEST raid_rebuild_test_sb_io 00:39:34.877 ************************************ 00:39:34.877 19:32:50 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true true 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:39:34.877 
19:32:50 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@544 -- # raid_pid=136939 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:34.877 19:32:50 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136939 /var/tmp/spdk-raid.sock 00:39:34.877 19:32:50 -- common/autotest_common.sh@817 -- # '[' -z 136939 ']' 00:39:34.877 19:32:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:34.877 19:32:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:39:34.877 19:32:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:34.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:34.877 19:32:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:39:34.877 19:32:50 -- common/autotest_common.sh@10 -- # set +x 00:39:34.877 [2024-04-18 19:32:50.480065] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:39:34.877 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:34.877 Zero copy mechanism will not be used. 
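Note: this is the standard bdevperf-over-RPC pattern: bdevperf has just been launched with -z, so it comes up with no bdevs and waits on the dedicated socket, the rpc.py calls that follow build the bdev stack, and I/O is only started later with the bdevperf.py helper's perform_tests. A condensed sketch of that flow (socket path, flags and RPC commands copied from the log; the glue shell is illustrative only):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # configure base bdevs once the RPC socket answers (waitforlisten in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # ... repeated for BaseBdev2-4 and the delay-backed "spare" bdev ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests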
00:39:34.877 [2024-04-18 19:32:50.480213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136939 ] 00:39:34.877 [2024-04-18 19:32:50.641941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.135 [2024-04-18 19:32:50.879895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.393 [2024-04-18 19:32:51.138587] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:35.652 19:32:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:39:35.652 19:32:51 -- common/autotest_common.sh@850 -- # return 0 00:39:35.652 19:32:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:35.652 19:32:51 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:39:35.652 19:32:51 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:39:35.910 BaseBdev1_malloc 00:39:35.910 19:32:51 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:36.169 [2024-04-18 19:32:51.931918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:36.169 [2024-04-18 19:32:51.932047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:36.169 [2024-04-18 19:32:51.932083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:39:36.169 [2024-04-18 19:32:51.932129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:36.169 [2024-04-18 19:32:51.934787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:36.169 [2024-04-18 19:32:51.934854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:36.169 BaseBdev1 00:39:36.169 19:32:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:36.169 19:32:51 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:39:36.169 19:32:51 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:36.427 BaseBdev2_malloc 00:39:36.427 19:32:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:36.685 [2024-04-18 19:32:52.564380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:36.685 [2024-04-18 19:32:52.564485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:36.685 [2024-04-18 19:32:52.564529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:39:36.685 [2024-04-18 19:32:52.564582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:36.685 [2024-04-18 19:32:52.567096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:36.685 [2024-04-18 19:32:52.567154] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:36.685 BaseBdev2 00:39:36.685 19:32:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:36.685 19:32:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:39:36.685 19:32:52 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:36.943 BaseBdev3_malloc 00:39:36.943 19:32:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:39:37.202 [2024-04-18 19:32:53.100711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:37.202 [2024-04-18 19:32:53.100824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:37.202 [2024-04-18 19:32:53.100865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:39:37.202 [2024-04-18 19:32:53.100907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:37.202 [2024-04-18 19:32:53.103463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:37.202 [2024-04-18 19:32:53.103531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:37.202 BaseBdev3 00:39:37.202 19:32:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:37.202 19:32:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:39:37.202 19:32:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:39:37.769 BaseBdev4_malloc 00:39:37.769 19:32:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:38.029 [2024-04-18 19:32:53.729934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:39:38.029 [2024-04-18 19:32:53.730091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:38.029 [2024-04-18 19:32:53.730136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:39:38.029 [2024-04-18 19:32:53.730208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:38.029 [2024-04-18 19:32:53.732889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:38.029 [2024-04-18 19:32:53.732965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:38.029 BaseBdev4 00:39:38.029 19:32:53 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:39:38.301 spare_malloc 00:39:38.301 19:32:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:38.558 spare_delay 00:39:38.558 19:32:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:38.816 [2024-04-18 19:32:54.604797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:38.816 [2024-04-18 19:32:54.604917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:38.816 [2024-04-18 19:32:54.604955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:38.816 [2024-04-18 19:32:54.605016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:38.816 [2024-04-18 19:32:54.607649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:39:38.816 [2024-04-18 19:32:54.607733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:38.816 spare 00:39:38.816 19:32:54 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:39:39.073 [2024-04-18 19:32:54.824963] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:39.073 [2024-04-18 19:32:54.827401] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:39.073 [2024-04-18 19:32:54.827501] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:39.073 [2024-04-18 19:32:54.827554] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:39.073 [2024-04-18 19:32:54.827791] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:39:39.073 [2024-04-18 19:32:54.827803] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:39.073 [2024-04-18 19:32:54.827978] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:39.073 [2024-04-18 19:32:54.828492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:39:39.073 [2024-04-18 19:32:54.828524] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:39:39.073 [2024-04-18 19:32:54.828752] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:39.073 19:32:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.332 19:32:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:39.332 "name": "raid_bdev1", 00:39:39.332 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:39.332 "strip_size_kb": 0, 00:39:39.332 "state": "online", 00:39:39.332 "raid_level": "raid1", 00:39:39.332 "superblock": true, 00:39:39.332 "num_base_bdevs": 4, 00:39:39.332 "num_base_bdevs_discovered": 4, 00:39:39.332 "num_base_bdevs_operational": 4, 00:39:39.332 "base_bdevs_list": [ 00:39:39.332 { 00:39:39.332 "name": "BaseBdev1", 00:39:39.332 "uuid": "5aaf679b-3047-5b61-a7c1-96b49ac6e838", 00:39:39.332 "is_configured": true, 00:39:39.332 "data_offset": 2048, 00:39:39.332 "data_size": 63488 00:39:39.332 }, 00:39:39.332 { 00:39:39.332 "name": "BaseBdev2", 00:39:39.332 "uuid": "28e6a409-1109-5662-9d55-df4ddb6d1f2a", 00:39:39.332 "is_configured": true, 00:39:39.332 "data_offset": 2048, 
00:39:39.332 "data_size": 63488 00:39:39.332 }, 00:39:39.332 { 00:39:39.332 "name": "BaseBdev3", 00:39:39.332 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:39.332 "is_configured": true, 00:39:39.332 "data_offset": 2048, 00:39:39.332 "data_size": 63488 00:39:39.332 }, 00:39:39.332 { 00:39:39.332 "name": "BaseBdev4", 00:39:39.332 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:39.332 "is_configured": true, 00:39:39.332 "data_offset": 2048, 00:39:39.332 "data_size": 63488 00:39:39.332 } 00:39:39.332 ] 00:39:39.332 }' 00:39:39.332 19:32:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:39.332 19:32:55 -- common/autotest_common.sh@10 -- # set +x 00:39:40.265 19:32:55 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:40.265 19:32:55 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:39:40.265 [2024-04-18 19:32:56.113584] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:40.265 19:32:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:39:40.265 19:32:56 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:40.265 19:32:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:40.523 19:32:56 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:39:40.523 19:32:56 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:39:40.523 19:32:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:39:40.523 19:32:56 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:39:40.781 [2024-04-18 19:32:56.539498] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:40.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:40.781 Zero copy mechanism will not be used. 00:39:40.781 Running I/O for 60 seconds... 
00:39:40.781 [2024-04-18 19:32:56.615678] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:40.781 [2024-04-18 19:32:56.622973] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:40.781 19:32:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.040 19:32:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:41.040 "name": "raid_bdev1", 00:39:41.040 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:41.040 "strip_size_kb": 0, 00:39:41.040 "state": "online", 00:39:41.040 "raid_level": "raid1", 00:39:41.040 "superblock": true, 00:39:41.040 "num_base_bdevs": 4, 00:39:41.040 "num_base_bdevs_discovered": 3, 00:39:41.040 "num_base_bdevs_operational": 3, 00:39:41.040 "base_bdevs_list": [ 00:39:41.040 { 00:39:41.040 "name": null, 00:39:41.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.040 "is_configured": false, 00:39:41.040 "data_offset": 2048, 00:39:41.040 "data_size": 63488 00:39:41.040 }, 00:39:41.040 { 00:39:41.040 "name": "BaseBdev2", 00:39:41.040 "uuid": "28e6a409-1109-5662-9d55-df4ddb6d1f2a", 00:39:41.040 "is_configured": true, 00:39:41.040 "data_offset": 2048, 00:39:41.040 "data_size": 63488 00:39:41.040 }, 00:39:41.040 { 00:39:41.040 "name": "BaseBdev3", 00:39:41.040 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:41.040 "is_configured": true, 00:39:41.040 "data_offset": 2048, 00:39:41.040 "data_size": 63488 00:39:41.040 }, 00:39:41.040 { 00:39:41.040 "name": "BaseBdev4", 00:39:41.040 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:41.040 "is_configured": true, 00:39:41.040 "data_offset": 2048, 00:39:41.040 "data_size": 63488 00:39:41.040 } 00:39:41.040 ] 00:39:41.040 }' 00:39:41.040 19:32:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:41.040 19:32:56 -- common/autotest_common.sh@10 -- # set +x 00:39:41.975 19:32:57 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:42.233 [2024-04-18 19:32:57.997609] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:39:42.233 [2024-04-18 19:32:57.997673] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:42.233 19:32:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:39:42.233 [2024-04-18 19:32:58.058867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:42.233 [2024-04-18 19:32:58.061291] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:42.490 
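Note: attaching the delay-backed "spare" bdev with bdev_raid_add_base_bdev triggered the rebuild reported above; while it runs, bdev_raid_get_bdevs exposes a "process" object (type, target, progress.blocks/percent) that the helper keeps checking. A minimal polling sketch, assuming the same socket and bdev name as in the log (the loop itself is illustrative):

    while :; do
        status=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1")
                     | "\(.process.type // "none") \(.process.target // "none") \(.process.progress.percent // 0)"')
        echo "$status"                      # e.g. "rebuild spare 22"
        [[ $status == none* ]] && break     # the process object disappears once the rebuild completes
        sleep 1
    done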
[2024-04-18 19:32:58.182173] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:42.490 [2024-04-18 19:32:58.182855] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:42.490 [2024-04-18 19:32:58.294271] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:42.490 [2024-04-18 19:32:58.295151] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:43.055 [2024-04-18 19:32:58.673523] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:43.055 [2024-04-18 19:32:58.674334] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:43.055 [2024-04-18 19:32:58.796788] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:43.055 [2024-04-18 19:32:58.797161] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:43.313 19:32:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:43.313 [2024-04-18 19:32:59.076829] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:43.313 [2024-04-18 19:32:59.077439] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:43.571 19:32:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:43.571 "name": "raid_bdev1", 00:39:43.571 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:43.571 "strip_size_kb": 0, 00:39:43.571 "state": "online", 00:39:43.571 "raid_level": "raid1", 00:39:43.571 "superblock": true, 00:39:43.571 "num_base_bdevs": 4, 00:39:43.571 "num_base_bdevs_discovered": 4, 00:39:43.571 "num_base_bdevs_operational": 4, 00:39:43.571 "process": { 00:39:43.571 "type": "rebuild", 00:39:43.571 "target": "spare", 00:39:43.571 "progress": { 00:39:43.571 "blocks": 14336, 00:39:43.571 "percent": 22 00:39:43.571 } 00:39:43.571 }, 00:39:43.571 "base_bdevs_list": [ 00:39:43.571 { 00:39:43.571 "name": "spare", 00:39:43.571 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:43.571 "is_configured": true, 00:39:43.571 "data_offset": 2048, 00:39:43.571 "data_size": 63488 00:39:43.571 }, 00:39:43.571 { 00:39:43.571 "name": "BaseBdev2", 00:39:43.571 "uuid": "28e6a409-1109-5662-9d55-df4ddb6d1f2a", 00:39:43.571 "is_configured": true, 00:39:43.571 "data_offset": 2048, 00:39:43.571 "data_size": 63488 00:39:43.571 }, 00:39:43.571 { 00:39:43.571 "name": "BaseBdev3", 00:39:43.571 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:43.571 "is_configured": true, 00:39:43.571 "data_offset": 2048, 
00:39:43.571 "data_size": 63488 00:39:43.571 }, 00:39:43.571 { 00:39:43.571 "name": "BaseBdev4", 00:39:43.571 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:43.571 "is_configured": true, 00:39:43.571 "data_offset": 2048, 00:39:43.571 "data_size": 63488 00:39:43.571 } 00:39:43.571 ] 00:39:43.571 }' 00:39:43.571 19:32:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:43.571 [2024-04-18 19:32:59.307255] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:43.571 19:32:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:43.571 19:32:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:43.571 19:32:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:43.571 19:32:59 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:43.829 [2024-04-18 19:32:59.704274] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:43.829 [2024-04-18 19:32:59.705002] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:43.829 [2024-04-18 19:32:59.743790] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:44.087 [2024-04-18 19:32:59.925659] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:44.087 [2024-04-18 19:32:59.939413] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:44.087 [2024-04-18 19:32:59.969337] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:44.087 19:33:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:44.653 19:33:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:44.653 "name": "raid_bdev1", 00:39:44.653 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:44.653 "strip_size_kb": 0, 00:39:44.653 "state": "online", 00:39:44.653 "raid_level": "raid1", 00:39:44.653 "superblock": true, 00:39:44.653 "num_base_bdevs": 4, 00:39:44.653 "num_base_bdevs_discovered": 3, 00:39:44.653 "num_base_bdevs_operational": 3, 00:39:44.653 "base_bdevs_list": [ 00:39:44.653 { 00:39:44.653 "name": null, 00:39:44.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.653 "is_configured": false, 00:39:44.653 "data_offset": 2048, 00:39:44.653 "data_size": 63488 00:39:44.653 }, 00:39:44.653 { 00:39:44.653 "name": 
"BaseBdev2", 00:39:44.653 "uuid": "28e6a409-1109-5662-9d55-df4ddb6d1f2a", 00:39:44.653 "is_configured": true, 00:39:44.653 "data_offset": 2048, 00:39:44.653 "data_size": 63488 00:39:44.653 }, 00:39:44.653 { 00:39:44.653 "name": "BaseBdev3", 00:39:44.653 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:44.653 "is_configured": true, 00:39:44.653 "data_offset": 2048, 00:39:44.653 "data_size": 63488 00:39:44.653 }, 00:39:44.653 { 00:39:44.653 "name": "BaseBdev4", 00:39:44.653 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:44.653 "is_configured": true, 00:39:44.653 "data_offset": 2048, 00:39:44.653 "data_size": 63488 00:39:44.653 } 00:39:44.653 ] 00:39:44.653 }' 00:39:44.653 19:33:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:44.653 19:33:00 -- common/autotest_common.sh@10 -- # set +x 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:45.219 19:33:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:45.786 19:33:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:45.786 "name": "raid_bdev1", 00:39:45.786 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:45.786 "strip_size_kb": 0, 00:39:45.786 "state": "online", 00:39:45.786 "raid_level": "raid1", 00:39:45.786 "superblock": true, 00:39:45.786 "num_base_bdevs": 4, 00:39:45.786 "num_base_bdevs_discovered": 3, 00:39:45.786 "num_base_bdevs_operational": 3, 00:39:45.786 "base_bdevs_list": [ 00:39:45.786 { 00:39:45.786 "name": null, 00:39:45.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:45.786 "is_configured": false, 00:39:45.786 "data_offset": 2048, 00:39:45.786 "data_size": 63488 00:39:45.786 }, 00:39:45.786 { 00:39:45.786 "name": "BaseBdev2", 00:39:45.787 "uuid": "28e6a409-1109-5662-9d55-df4ddb6d1f2a", 00:39:45.787 "is_configured": true, 00:39:45.787 "data_offset": 2048, 00:39:45.787 "data_size": 63488 00:39:45.787 }, 00:39:45.787 { 00:39:45.787 "name": "BaseBdev3", 00:39:45.787 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:45.787 "is_configured": true, 00:39:45.787 "data_offset": 2048, 00:39:45.787 "data_size": 63488 00:39:45.787 }, 00:39:45.787 { 00:39:45.787 "name": "BaseBdev4", 00:39:45.787 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:45.787 "is_configured": true, 00:39:45.787 "data_offset": 2048, 00:39:45.787 "data_size": 63488 00:39:45.787 } 00:39:45.787 ] 00:39:45.787 }' 00:39:45.787 19:33:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:45.787 19:33:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:45.787 19:33:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:45.787 19:33:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:45.787 19:33:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:46.046 [2024-04-18 19:33:01.747071] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:39:46.046 [2024-04-18 19:33:01.747146] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:39:46.046 19:33:01 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:39:46.046 [2024-04-18 19:33:01.825920] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:39:46.046 [2024-04-18 19:33:01.828353] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:46.304 [2024-04-18 19:33:02.109626] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:46.304 [2024-04-18 19:33:02.109992] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:46.563 [2024-04-18 19:33:02.484887] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:46.563 [2024-04-18 19:33:02.487281] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:47.130 [2024-04-18 19:33:02.750809] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:47.130 19:33:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.130 [2024-04-18 19:33:02.997734] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:47.130 [2024-04-18 19:33:02.998416] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:47.389 "name": "raid_bdev1", 00:39:47.389 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:47.389 "strip_size_kb": 0, 00:39:47.389 "state": "online", 00:39:47.389 "raid_level": "raid1", 00:39:47.389 "superblock": true, 00:39:47.389 "num_base_bdevs": 4, 00:39:47.389 "num_base_bdevs_discovered": 4, 00:39:47.389 "num_base_bdevs_operational": 4, 00:39:47.389 "process": { 00:39:47.389 "type": "rebuild", 00:39:47.389 "target": "spare", 00:39:47.389 "progress": { 00:39:47.389 "blocks": 14336, 00:39:47.389 "percent": 22 00:39:47.389 } 00:39:47.389 }, 00:39:47.389 "base_bdevs_list": [ 00:39:47.389 { 00:39:47.389 "name": "spare", 00:39:47.389 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:47.389 "is_configured": true, 00:39:47.389 "data_offset": 2048, 00:39:47.389 "data_size": 63488 00:39:47.389 }, 00:39:47.389 { 00:39:47.389 "name": "BaseBdev2", 00:39:47.389 "uuid": "28e6a409-1109-5662-9d55-df4ddb6d1f2a", 00:39:47.389 "is_configured": true, 00:39:47.389 "data_offset": 2048, 00:39:47.389 "data_size": 63488 00:39:47.389 }, 00:39:47.389 { 00:39:47.389 "name": "BaseBdev3", 00:39:47.389 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:47.389 "is_configured": true, 00:39:47.389 "data_offset": 2048, 00:39:47.389 "data_size": 63488 00:39:47.389 }, 00:39:47.389 { 00:39:47.389 "name": "BaseBdev4", 00:39:47.389 "uuid": 
"7e008659-e995-5987-9a0b-eade921d5a72", 00:39:47.389 "is_configured": true, 00:39:47.389 "data_offset": 2048, 00:39:47.389 "data_size": 63488 00:39:47.389 } 00:39:47.389 ] 00:39:47.389 }' 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:39:47.389 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:39:47.389 [2024-04-18 19:33:03.241871] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:47.389 19:33:03 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:39:47.646 [2024-04-18 19:33:03.522132] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:47.905 [2024-04-18 19:33:03.606918] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:39:47.905 [2024-04-18 19:33:03.607241] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:39:47.905 [2024-04-18 19:33:03.717183] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005e10 00:39:47.905 [2024-04-18 19:33:03.717240] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:48.164 19:33:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:48.164 [2024-04-18 19:33:03.987135] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:48.422 "name": "raid_bdev1", 00:39:48.422 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:48.422 "strip_size_kb": 0, 00:39:48.422 "state": "online", 00:39:48.422 "raid_level": "raid1", 00:39:48.422 "superblock": true, 00:39:48.422 "num_base_bdevs": 4, 00:39:48.422 "num_base_bdevs_discovered": 3, 00:39:48.422 "num_base_bdevs_operational": 3, 00:39:48.422 "process": { 00:39:48.422 "type": "rebuild", 00:39:48.422 "target": "spare", 00:39:48.422 "progress": { 00:39:48.422 
"blocks": 28672, 00:39:48.422 "percent": 45 00:39:48.422 } 00:39:48.422 }, 00:39:48.422 "base_bdevs_list": [ 00:39:48.422 { 00:39:48.422 "name": "spare", 00:39:48.422 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:48.422 "is_configured": true, 00:39:48.422 "data_offset": 2048, 00:39:48.422 "data_size": 63488 00:39:48.422 }, 00:39:48.422 { 00:39:48.422 "name": null, 00:39:48.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:48.422 "is_configured": false, 00:39:48.422 "data_offset": 2048, 00:39:48.422 "data_size": 63488 00:39:48.422 }, 00:39:48.422 { 00:39:48.422 "name": "BaseBdev3", 00:39:48.422 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:48.422 "is_configured": true, 00:39:48.422 "data_offset": 2048, 00:39:48.422 "data_size": 63488 00:39:48.422 }, 00:39:48.422 { 00:39:48.422 "name": "BaseBdev4", 00:39:48.422 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:48.422 "is_configured": true, 00:39:48.422 "data_offset": 2048, 00:39:48.422 "data_size": 63488 00:39:48.422 } 00:39:48.422 ] 00:39:48.422 }' 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@657 -- # local timeout=629 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:48.422 19:33:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:48.697 [2024-04-18 19:33:04.440255] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:39:48.697 19:33:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:48.697 "name": "raid_bdev1", 00:39:48.697 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:48.697 "strip_size_kb": 0, 00:39:48.697 "state": "online", 00:39:48.697 "raid_level": "raid1", 00:39:48.697 "superblock": true, 00:39:48.697 "num_base_bdevs": 4, 00:39:48.697 "num_base_bdevs_discovered": 3, 00:39:48.697 "num_base_bdevs_operational": 3, 00:39:48.697 "process": { 00:39:48.697 "type": "rebuild", 00:39:48.697 "target": "spare", 00:39:48.697 "progress": { 00:39:48.697 "blocks": 34816, 00:39:48.697 "percent": 54 00:39:48.697 } 00:39:48.697 }, 00:39:48.697 "base_bdevs_list": [ 00:39:48.697 { 00:39:48.697 "name": "spare", 00:39:48.697 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:48.697 "is_configured": true, 00:39:48.697 "data_offset": 2048, 00:39:48.697 "data_size": 63488 00:39:48.697 }, 00:39:48.697 { 00:39:48.697 "name": null, 00:39:48.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:48.697 "is_configured": false, 00:39:48.697 "data_offset": 2048, 00:39:48.697 "data_size": 63488 00:39:48.697 }, 00:39:48.697 { 00:39:48.697 "name": "BaseBdev3", 00:39:48.697 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 
00:39:48.697 "is_configured": true, 00:39:48.697 "data_offset": 2048, 00:39:48.697 "data_size": 63488 00:39:48.697 }, 00:39:48.697 { 00:39:48.697 "name": "BaseBdev4", 00:39:48.697 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:48.697 "is_configured": true, 00:39:48.697 "data_offset": 2048, 00:39:48.697 "data_size": 63488 00:39:48.697 } 00:39:48.697 ] 00:39:48.697 }' 00:39:48.697 19:33:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:48.697 19:33:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:48.697 19:33:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:48.697 19:33:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:48.697 19:33:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:49.276 [2024-04-18 19:33:05.103674] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:49.842 19:33:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:50.100 19:33:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:50.100 "name": "raid_bdev1", 00:39:50.100 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:50.100 "strip_size_kb": 0, 00:39:50.100 "state": "online", 00:39:50.100 "raid_level": "raid1", 00:39:50.100 "superblock": true, 00:39:50.100 "num_base_bdevs": 4, 00:39:50.100 "num_base_bdevs_discovered": 3, 00:39:50.100 "num_base_bdevs_operational": 3, 00:39:50.100 "process": { 00:39:50.100 "type": "rebuild", 00:39:50.100 "target": "spare", 00:39:50.100 "progress": { 00:39:50.100 "blocks": 55296, 00:39:50.100 "percent": 87 00:39:50.100 } 00:39:50.100 }, 00:39:50.100 "base_bdevs_list": [ 00:39:50.100 { 00:39:50.100 "name": "spare", 00:39:50.100 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:50.100 "is_configured": true, 00:39:50.100 "data_offset": 2048, 00:39:50.100 "data_size": 63488 00:39:50.100 }, 00:39:50.100 { 00:39:50.100 "name": null, 00:39:50.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:50.100 "is_configured": false, 00:39:50.100 "data_offset": 2048, 00:39:50.100 "data_size": 63488 00:39:50.100 }, 00:39:50.100 { 00:39:50.100 "name": "BaseBdev3", 00:39:50.100 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:50.100 "is_configured": true, 00:39:50.100 "data_offset": 2048, 00:39:50.100 "data_size": 63488 00:39:50.100 }, 00:39:50.100 { 00:39:50.100 "name": "BaseBdev4", 00:39:50.100 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:50.100 "is_configured": true, 00:39:50.100 "data_offset": 2048, 00:39:50.100 "data_size": 63488 00:39:50.100 } 00:39:50.100 ] 00:39:50.100 }' 00:39:50.100 19:33:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:50.100 19:33:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:50.100 19:33:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:50.100 19:33:05 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:39:50.100 19:33:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:50.100 [2024-04-18 19:33:05.982753] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:39:50.358 [2024-04-18 19:33:06.214102] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:50.615 [2024-04-18 19:33:06.321260] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:50.615 [2024-04-18 19:33:06.324409] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.180 19:33:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:51.438 "name": "raid_bdev1", 00:39:51.438 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:51.438 "strip_size_kb": 0, 00:39:51.438 "state": "online", 00:39:51.438 "raid_level": "raid1", 00:39:51.438 "superblock": true, 00:39:51.438 "num_base_bdevs": 4, 00:39:51.438 "num_base_bdevs_discovered": 3, 00:39:51.438 "num_base_bdevs_operational": 3, 00:39:51.438 "base_bdevs_list": [ 00:39:51.438 { 00:39:51.438 "name": "spare", 00:39:51.438 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:51.438 "is_configured": true, 00:39:51.438 "data_offset": 2048, 00:39:51.438 "data_size": 63488 00:39:51.438 }, 00:39:51.438 { 00:39:51.438 "name": null, 00:39:51.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:51.438 "is_configured": false, 00:39:51.438 "data_offset": 2048, 00:39:51.438 "data_size": 63488 00:39:51.438 }, 00:39:51.438 { 00:39:51.438 "name": "BaseBdev3", 00:39:51.438 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:51.438 "is_configured": true, 00:39:51.438 "data_offset": 2048, 00:39:51.438 "data_size": 63488 00:39:51.438 }, 00:39:51.438 { 00:39:51.438 "name": "BaseBdev4", 00:39:51.438 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:51.438 "is_configured": true, 00:39:51.438 "data_offset": 2048, 00:39:51.438 "data_size": 63488 00:39:51.438 } 00:39:51.438 ] 00:39:51.438 }' 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@660 -- # break 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:51.438 19:33:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:52.003 "name": "raid_bdev1", 00:39:52.003 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:52.003 "strip_size_kb": 0, 00:39:52.003 "state": "online", 00:39:52.003 "raid_level": "raid1", 00:39:52.003 "superblock": true, 00:39:52.003 "num_base_bdevs": 4, 00:39:52.003 "num_base_bdevs_discovered": 3, 00:39:52.003 "num_base_bdevs_operational": 3, 00:39:52.003 "base_bdevs_list": [ 00:39:52.003 { 00:39:52.003 "name": "spare", 00:39:52.003 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:52.003 "is_configured": true, 00:39:52.003 "data_offset": 2048, 00:39:52.003 "data_size": 63488 00:39:52.003 }, 00:39:52.003 { 00:39:52.003 "name": null, 00:39:52.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.003 "is_configured": false, 00:39:52.003 "data_offset": 2048, 00:39:52.003 "data_size": 63488 00:39:52.003 }, 00:39:52.003 { 00:39:52.003 "name": "BaseBdev3", 00:39:52.003 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:52.003 "is_configured": true, 00:39:52.003 "data_offset": 2048, 00:39:52.003 "data_size": 63488 00:39:52.003 }, 00:39:52.003 { 00:39:52.003 "name": "BaseBdev4", 00:39:52.003 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:52.003 "is_configured": true, 00:39:52.003 "data_offset": 2048, 00:39:52.003 "data_size": 63488 00:39:52.003 } 00:39:52.003 ] 00:39:52.003 }' 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:52.003 19:33:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.261 19:33:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:52.261 "name": "raid_bdev1", 00:39:52.261 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:52.261 "strip_size_kb": 0, 00:39:52.261 "state": "online", 00:39:52.261 "raid_level": "raid1", 00:39:52.261 "superblock": true, 00:39:52.261 "num_base_bdevs": 4, 00:39:52.261 "num_base_bdevs_discovered": 3, 00:39:52.261 "num_base_bdevs_operational": 3, 00:39:52.261 "base_bdevs_list": [ 00:39:52.261 { 00:39:52.261 "name": "spare", 00:39:52.261 "uuid": 
"607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:52.261 "is_configured": true, 00:39:52.261 "data_offset": 2048, 00:39:52.261 "data_size": 63488 00:39:52.261 }, 00:39:52.261 { 00:39:52.261 "name": null, 00:39:52.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.261 "is_configured": false, 00:39:52.261 "data_offset": 2048, 00:39:52.261 "data_size": 63488 00:39:52.261 }, 00:39:52.261 { 00:39:52.261 "name": "BaseBdev3", 00:39:52.261 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:52.261 "is_configured": true, 00:39:52.261 "data_offset": 2048, 00:39:52.261 "data_size": 63488 00:39:52.261 }, 00:39:52.261 { 00:39:52.261 "name": "BaseBdev4", 00:39:52.261 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:52.261 "is_configured": true, 00:39:52.261 "data_offset": 2048, 00:39:52.261 "data_size": 63488 00:39:52.261 } 00:39:52.261 ] 00:39:52.261 }' 00:39:52.261 19:33:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:52.261 19:33:08 -- common/autotest_common.sh@10 -- # set +x 00:39:52.827 19:33:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:53.086 [2024-04-18 19:33:09.004961] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:53.086 [2024-04-18 19:33:09.005009] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:53.344 00:39:53.344 Latency(us) 00:39:53.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.344 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:39:53.344 raid_bdev1 : 12.54 89.29 267.88 0.00 0.00 16429.68 569.54 126827.76 00:39:53.344 =================================================================================================================== 00:39:53.344 Total : 89.29 267.88 0.00 0.00 16429.68 569.54 126827.76 00:39:53.344 [2024-04-18 19:33:09.113393] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:53.344 [2024-04-18 19:33:09.113462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:53.344 0 00:39:53.344 [2024-04-18 19:33:09.113570] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:53.344 [2024-04-18 19:33:09.113582] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:39:53.344 19:33:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:53.344 19:33:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:39:53.602 19:33:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:39:53.602 19:33:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:39:53.602 19:33:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@12 -- # local i 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:53.602 19:33:09 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:39:53.859 /dev/nbd0 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:53.859 19:33:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:39:53.859 19:33:09 -- common/autotest_common.sh@855 -- # local i 00:39:53.859 19:33:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:53.859 19:33:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:53.859 19:33:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:39:53.859 19:33:09 -- common/autotest_common.sh@859 -- # break 00:39:53.859 19:33:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:53.859 19:33:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:53.859 19:33:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:53.859 1+0 records in 00:39:53.859 1+0 records out 00:39:53.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600097 s, 6.8 MB/s 00:39:53.859 19:33:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:53.859 19:33:09 -- common/autotest_common.sh@872 -- # size=4096 00:39:53.859 19:33:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:53.859 19:33:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:53.859 19:33:09 -- common/autotest_common.sh@875 -- # return 0 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:53.859 19:33:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:39:53.859 19:33:09 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:39:53.859 19:33:09 -- bdev/bdev_raid.sh@678 -- # continue 00:39:53.859 19:33:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:39:53.859 19:33:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:39:53.859 19:33:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@12 -- # local i 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:53.859 19:33:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:39:54.116 /dev/nbd1 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:54.374 19:33:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:54.374 19:33:10 -- common/autotest_common.sh@855 -- # local i 00:39:54.374 19:33:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:54.374 19:33:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:54.374 19:33:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:54.374 19:33:10 -- common/autotest_common.sh@859 -- # break 00:39:54.374 19:33:10 -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:54.374 19:33:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:54.374 19:33:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:54.374 1+0 records in 00:39:54.374 1+0 records out 00:39:54.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266329 s, 15.4 MB/s 00:39:54.374 19:33:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:54.374 19:33:10 -- common/autotest_common.sh@872 -- # size=4096 00:39:54.374 19:33:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:54.374 19:33:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:54.374 19:33:10 -- common/autotest_common.sh@875 -- # return 0 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:54.374 19:33:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:54.374 19:33:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@51 -- # local i 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:54.374 19:33:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@41 -- # break 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@45 -- # return 0 00:39:54.940 19:33:10 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:39:54.940 19:33:10 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:39:54.940 19:33:10 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@12 -- # local i 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:54.940 19:33:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:39:55.198 /dev/nbd1 00:39:55.198 19:33:11 -- bdev/nbd_common.sh@17 
-- # basename /dev/nbd1 00:39:55.198 19:33:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:55.198 19:33:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:55.198 19:33:11 -- common/autotest_common.sh@855 -- # local i 00:39:55.198 19:33:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:55.198 19:33:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:55.198 19:33:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:55.198 19:33:11 -- common/autotest_common.sh@859 -- # break 00:39:55.198 19:33:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:55.198 19:33:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:55.198 19:33:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:55.198 1+0 records in 00:39:55.198 1+0 records out 00:39:55.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028152 s, 14.5 MB/s 00:39:55.198 19:33:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:55.198 19:33:11 -- common/autotest_common.sh@872 -- # size=4096 00:39:55.198 19:33:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:55.198 19:33:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:55.198 19:33:11 -- common/autotest_common.sh@875 -- # return 0 00:39:55.198 19:33:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:55.198 19:33:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:55.198 19:33:11 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:55.533 19:33:11 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:39:55.533 19:33:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:55.533 19:33:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:55.533 19:33:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:55.533 19:33:11 -- bdev/nbd_common.sh@51 -- # local i 00:39:55.533 19:33:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:55.533 19:33:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@41 -- # break 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@45 -- # return 0 00:39:55.792 19:33:11 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@51 -- # local i 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:55.792 19:33:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd0 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@41 -- # break 00:39:56.051 19:33:11 -- bdev/nbd_common.sh@45 -- # return 0 00:39:56.051 19:33:11 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:39:56.051 19:33:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:56.051 19:33:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:39:56.051 19:33:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:56.309 19:33:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:56.567 [2024-04-18 19:33:12.276420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:56.567 [2024-04-18 19:33:12.276522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:56.567 [2024-04-18 19:33:12.276564] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:39:56.567 [2024-04-18 19:33:12.276586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:56.567 [2024-04-18 19:33:12.279209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:56.567 [2024-04-18 19:33:12.279284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:56.567 [2024-04-18 19:33:12.279434] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:56.567 [2024-04-18 19:33:12.279511] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:56.567 BaseBdev1 00:39:56.567 19:33:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:56.567 19:33:12 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:39:56.567 19:33:12 -- bdev/bdev_raid.sh@696 -- # continue 00:39:56.567 19:33:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:56.567 19:33:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:39:56.567 19:33:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:39:56.827 19:33:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:39:57.086 [2024-04-18 19:33:12.846320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:57.086 [2024-04-18 19:33:12.846428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.086 [2024-04-18 19:33:12.846487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:39:57.086 [2024-04-18 19:33:12.846529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.086 [2024-04-18 19:33:12.847095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.086 [2024-04-18 19:33:12.847165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:57.086 [2024-04-18 19:33:12.847327] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev BaseBdev3 00:39:57.086 [2024-04-18 19:33:12.847343] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:39:57.087 [2024-04-18 19:33:12.847353] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:57.087 [2024-04-18 19:33:12.847425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:39:57.087 [2024-04-18 19:33:12.847529] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:57.087 BaseBdev3 00:39:57.087 19:33:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:39:57.087 19:33:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:39:57.087 19:33:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:39:57.346 19:33:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:57.605 [2024-04-18 19:33:13.462440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:39:57.605 [2024-04-18 19:33:13.462545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.605 [2024-04-18 19:33:13.462580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:39:57.605 [2024-04-18 19:33:13.462608] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.605 [2024-04-18 19:33:13.463135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.605 [2024-04-18 19:33:13.463195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:57.605 [2024-04-18 19:33:13.463314] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:39:57.605 [2024-04-18 19:33:13.463347] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:57.605 BaseBdev4 00:39:57.605 19:33:13 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:57.863 19:33:13 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:58.121 [2024-04-18 19:33:13.954653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:58.121 [2024-04-18 19:33:13.954755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:58.121 [2024-04-18 19:33:13.954790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:39:58.121 [2024-04-18 19:33:13.954817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:58.121 [2024-04-18 19:33:13.955396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:58.121 [2024-04-18 19:33:13.955466] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:58.121 [2024-04-18 19:33:13.955600] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:39:58.121 [2024-04-18 19:33:13.955635] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:58.121 spare 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
3 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:58.121 19:33:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:58.380 [2024-04-18 19:33:14.055755] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:39:58.380 [2024-04-18 19:33:14.055802] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:58.380 [2024-04-18 19:33:14.055986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000039860 00:39:58.380 [2024-04-18 19:33:14.056389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:39:58.380 [2024-04-18 19:33:14.056408] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:39:58.380 [2024-04-18 19:33:14.056569] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:58.380 19:33:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:58.380 "name": "raid_bdev1", 00:39:58.380 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:58.380 "strip_size_kb": 0, 00:39:58.380 "state": "online", 00:39:58.380 "raid_level": "raid1", 00:39:58.380 "superblock": true, 00:39:58.380 "num_base_bdevs": 4, 00:39:58.380 "num_base_bdevs_discovered": 3, 00:39:58.380 "num_base_bdevs_operational": 3, 00:39:58.380 "base_bdevs_list": [ 00:39:58.380 { 00:39:58.380 "name": "spare", 00:39:58.380 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:58.380 "is_configured": true, 00:39:58.380 "data_offset": 2048, 00:39:58.380 "data_size": 63488 00:39:58.380 }, 00:39:58.380 { 00:39:58.380 "name": null, 00:39:58.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:58.380 "is_configured": false, 00:39:58.380 "data_offset": 2048, 00:39:58.380 "data_size": 63488 00:39:58.380 }, 00:39:58.380 { 00:39:58.380 "name": "BaseBdev3", 00:39:58.380 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:58.380 "is_configured": true, 00:39:58.380 "data_offset": 2048, 00:39:58.380 "data_size": 63488 00:39:58.380 }, 00:39:58.380 { 00:39:58.380 "name": "BaseBdev4", 00:39:58.380 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:58.380 "is_configured": true, 00:39:58.380 "data_offset": 2048, 00:39:58.380 "data_size": 63488 00:39:58.380 } 00:39:58.380 ] 00:39:58.380 }' 00:39:58.380 19:33:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:58.380 19:33:14 -- common/autotest_common.sh@10 -- # set +x 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@185 -- # local 
target=none 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:59.313 19:33:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:59.313 19:33:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:39:59.313 "name": "raid_bdev1", 00:39:59.313 "uuid": "7ab864af-6d5b-444a-aad1-205c2232273e", 00:39:59.313 "strip_size_kb": 0, 00:39:59.313 "state": "online", 00:39:59.313 "raid_level": "raid1", 00:39:59.313 "superblock": true, 00:39:59.313 "num_base_bdevs": 4, 00:39:59.313 "num_base_bdevs_discovered": 3, 00:39:59.313 "num_base_bdevs_operational": 3, 00:39:59.313 "base_bdevs_list": [ 00:39:59.313 { 00:39:59.313 "name": "spare", 00:39:59.313 "uuid": "607190e6-f100-598a-a0b2-ae2c5f486984", 00:39:59.313 "is_configured": true, 00:39:59.313 "data_offset": 2048, 00:39:59.313 "data_size": 63488 00:39:59.313 }, 00:39:59.313 { 00:39:59.313 "name": null, 00:39:59.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.313 "is_configured": false, 00:39:59.313 "data_offset": 2048, 00:39:59.313 "data_size": 63488 00:39:59.313 }, 00:39:59.313 { 00:39:59.313 "name": "BaseBdev3", 00:39:59.313 "uuid": "d3c0ea36-6443-5de2-89e3-5206eed5b4ff", 00:39:59.313 "is_configured": true, 00:39:59.313 "data_offset": 2048, 00:39:59.313 "data_size": 63488 00:39:59.313 }, 00:39:59.313 { 00:39:59.313 "name": "BaseBdev4", 00:39:59.313 "uuid": "7e008659-e995-5987-9a0b-eade921d5a72", 00:39:59.313 "is_configured": true, 00:39:59.313 "data_offset": 2048, 00:39:59.313 "data_size": 63488 00:39:59.313 } 00:39:59.313 ] 00:39:59.313 }' 00:39:59.313 19:33:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:39:59.313 19:33:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:59.313 19:33:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:39:59.570 19:33:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:39:59.570 19:33:15 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:59.570 19:33:15 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:59.570 19:33:15 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:39:59.570 19:33:15 -- bdev/bdev_raid.sh@709 -- # killprocess 136939 00:39:59.570 19:33:15 -- common/autotest_common.sh@936 -- # '[' -z 136939 ']' 00:39:59.570 19:33:15 -- common/autotest_common.sh@940 -- # kill -0 136939 00:39:59.570 19:33:15 -- common/autotest_common.sh@941 -- # uname 00:39:59.828 19:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:39:59.828 19:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136939 00:39:59.828 19:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:39:59.828 killing process with pid 136939 00:39:59.828 Received shutdown signal, test time was about 18.973703 seconds 00:39:59.828 00:39:59.828 Latency(us) 00:39:59.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.828 =================================================================================================================== 00:39:59.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:59.828 19:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:39:59.828 19:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136939' 00:39:59.828 19:33:15 -- 
common/autotest_common.sh@955 -- # kill 136939 00:39:59.828 19:33:15 -- common/autotest_common.sh@960 -- # wait 136939 00:39:59.828 [2024-04-18 19:33:15.515669] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:59.828 [2024-04-18 19:33:15.515781] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:59.828 [2024-04-18 19:33:15.515870] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:59.829 [2024-04-18 19:33:15.515890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:40:00.398 [2024-04-18 19:33:16.018146] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@711 -- # return 0 00:40:01.808 00:40:01.808 real 0m27.208s 00:40:01.808 user 0m43.707s 00:40:01.808 sys 0m3.516s 00:40:01.808 19:33:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:01.808 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:40:01.808 ************************************ 00:40:01.808 END TEST raid_rebuild_test_sb_io 00:40:01.808 ************************************ 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:40:01.808 19:33:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:40:01.808 19:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:01.808 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:40:01.808 ************************************ 00:40:01.808 START TEST raid5f_state_function_test 00:40:01.808 ************************************ 00:40:01.808 19:33:17 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 false 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@212 -- # 
'[' raid5f '!=' raid1 ']' 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=137647 00:40:01.808 Process raid pid: 137647 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137647' 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:40:01.808 19:33:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137647 /var/tmp/spdk-raid.sock 00:40:01.808 19:33:17 -- common/autotest_common.sh@817 -- # '[' -z 137647 ']' 00:40:01.808 19:33:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:01.808 19:33:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:01.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:40:01.808 19:33:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:01.808 19:33:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:01.808 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:40:02.066 [2024-04-18 19:33:17.791996] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:40:02.066 [2024-04-18 19:33:17.792162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:02.066 [2024-04-18 19:33:17.954280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.324 [2024-04-18 19:33:18.170813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.583 [2024-04-18 19:33:18.390056] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:02.841 19:33:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:02.841 19:33:18 -- common/autotest_common.sh@850 -- # return 0 00:40:02.841 19:33:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:40:03.100 [2024-04-18 19:33:18.953759] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:03.100 [2024-04-18 19:33:18.953851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:03.100 [2024-04-18 19:33:18.953865] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:03.100 [2024-04-18 19:33:18.953884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:03.100 [2024-04-18 19:33:18.953892] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:03.100 [2024-04-18 19:33:18.953933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:03.100 19:33:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:03.101 19:33:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:03.101 19:33:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:03.358 19:33:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:03.358 "name": "Existed_Raid", 00:40:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.358 "strip_size_kb": 64, 00:40:03.358 "state": "configuring", 00:40:03.358 "raid_level": "raid5f", 00:40:03.358 "superblock": false, 00:40:03.358 "num_base_bdevs": 3, 00:40:03.358 "num_base_bdevs_discovered": 0, 00:40:03.358 "num_base_bdevs_operational": 3, 00:40:03.358 "base_bdevs_list": [ 00:40:03.358 { 00:40:03.358 "name": "BaseBdev1", 00:40:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.358 "is_configured": false, 00:40:03.358 "data_offset": 0, 00:40:03.358 "data_size": 0 00:40:03.358 }, 00:40:03.358 { 00:40:03.358 "name": "BaseBdev2", 00:40:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.358 "is_configured": false, 00:40:03.358 "data_offset": 0, 00:40:03.358 "data_size": 0 00:40:03.358 }, 00:40:03.358 { 00:40:03.358 "name": "BaseBdev3", 00:40:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.358 "is_configured": false, 00:40:03.358 "data_offset": 0, 00:40:03.358 "data_size": 0 00:40:03.358 } 00:40:03.358 ] 00:40:03.358 }' 00:40:03.615 19:33:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:03.615 19:33:19 -- common/autotest_common.sh@10 -- # set +x 00:40:04.181 19:33:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:04.438 [2024-04-18 19:33:20.121875] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:04.438 [2024-04-18 19:33:20.121921] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:40:04.438 19:33:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:40:04.696 [2024-04-18 19:33:20.409941] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:04.696 [2024-04-18 19:33:20.410022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:04.696 [2024-04-18 19:33:20.410035] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:04.696 [2024-04-18 19:33:20.410063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:04.696 [2024-04-18 19:33:20.410071] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:04.696 [2024-04-18 19:33:20.410100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:04.696 19:33:20 -- 
bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:40:04.955 [2024-04-18 19:33:20.724555] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:04.955 BaseBdev1 00:40:04.955 19:33:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:40:04.955 19:33:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:40:04.955 19:33:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:04.955 19:33:20 -- common/autotest_common.sh@887 -- # local i 00:40:04.955 19:33:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:04.955 19:33:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:04.955 19:33:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:05.213 19:33:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:05.476 [ 00:40:05.476 { 00:40:05.476 "name": "BaseBdev1", 00:40:05.476 "aliases": [ 00:40:05.476 "6ca8a9ba-c596-4686-94db-be1559ee3c00" 00:40:05.476 ], 00:40:05.476 "product_name": "Malloc disk", 00:40:05.476 "block_size": 512, 00:40:05.476 "num_blocks": 65536, 00:40:05.476 "uuid": "6ca8a9ba-c596-4686-94db-be1559ee3c00", 00:40:05.476 "assigned_rate_limits": { 00:40:05.476 "rw_ios_per_sec": 0, 00:40:05.476 "rw_mbytes_per_sec": 0, 00:40:05.476 "r_mbytes_per_sec": 0, 00:40:05.476 "w_mbytes_per_sec": 0 00:40:05.476 }, 00:40:05.476 "claimed": true, 00:40:05.476 "claim_type": "exclusive_write", 00:40:05.476 "zoned": false, 00:40:05.476 "supported_io_types": { 00:40:05.476 "read": true, 00:40:05.476 "write": true, 00:40:05.476 "unmap": true, 00:40:05.476 "write_zeroes": true, 00:40:05.476 "flush": true, 00:40:05.476 "reset": true, 00:40:05.476 "compare": false, 00:40:05.476 "compare_and_write": false, 00:40:05.476 "abort": true, 00:40:05.476 "nvme_admin": false, 00:40:05.476 "nvme_io": false 00:40:05.476 }, 00:40:05.476 "memory_domains": [ 00:40:05.476 { 00:40:05.476 "dma_device_id": "system", 00:40:05.476 "dma_device_type": 1 00:40:05.476 }, 00:40:05.476 { 00:40:05.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:05.476 "dma_device_type": 2 00:40:05.476 } 00:40:05.476 ], 00:40:05.476 "driver_specific": {} 00:40:05.476 } 00:40:05.476 ] 00:40:05.476 19:33:21 -- common/autotest_common.sh@893 -- # return 0 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:05.476 19:33:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:40:05.739 19:33:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:05.739 "name": "Existed_Raid", 00:40:05.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.739 "strip_size_kb": 64, 00:40:05.739 "state": "configuring", 00:40:05.739 "raid_level": "raid5f", 00:40:05.739 "superblock": false, 00:40:05.739 "num_base_bdevs": 3, 00:40:05.739 "num_base_bdevs_discovered": 1, 00:40:05.739 "num_base_bdevs_operational": 3, 00:40:05.739 "base_bdevs_list": [ 00:40:05.739 { 00:40:05.739 "name": "BaseBdev1", 00:40:05.739 "uuid": "6ca8a9ba-c596-4686-94db-be1559ee3c00", 00:40:05.739 "is_configured": true, 00:40:05.739 "data_offset": 0, 00:40:05.739 "data_size": 65536 00:40:05.739 }, 00:40:05.739 { 00:40:05.739 "name": "BaseBdev2", 00:40:05.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.739 "is_configured": false, 00:40:05.739 "data_offset": 0, 00:40:05.739 "data_size": 0 00:40:05.739 }, 00:40:05.739 { 00:40:05.739 "name": "BaseBdev3", 00:40:05.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.739 "is_configured": false, 00:40:05.739 "data_offset": 0, 00:40:05.739 "data_size": 0 00:40:05.739 } 00:40:05.739 ] 00:40:05.739 }' 00:40:05.739 19:33:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:05.739 19:33:21 -- common/autotest_common.sh@10 -- # set +x 00:40:06.305 19:33:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:06.563 [2024-04-18 19:33:22.417015] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:06.563 [2024-04-18 19:33:22.417082] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:40:06.563 19:33:22 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:40:06.563 19:33:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:40:06.821 [2024-04-18 19:33:22.685100] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:06.821 [2024-04-18 19:33:22.687293] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:06.821 [2024-04-18 19:33:22.687378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:06.821 [2024-04-18 19:33:22.687390] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:06.821 [2024-04-18 19:33:22.687418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:06.821 19:33:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:07.079 19:33:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:07.079 "name": "Existed_Raid", 00:40:07.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.079 "strip_size_kb": 64, 00:40:07.079 "state": "configuring", 00:40:07.079 "raid_level": "raid5f", 00:40:07.079 "superblock": false, 00:40:07.079 "num_base_bdevs": 3, 00:40:07.079 "num_base_bdevs_discovered": 1, 00:40:07.079 "num_base_bdevs_operational": 3, 00:40:07.079 "base_bdevs_list": [ 00:40:07.079 { 00:40:07.079 "name": "BaseBdev1", 00:40:07.079 "uuid": "6ca8a9ba-c596-4686-94db-be1559ee3c00", 00:40:07.079 "is_configured": true, 00:40:07.079 "data_offset": 0, 00:40:07.079 "data_size": 65536 00:40:07.079 }, 00:40:07.079 { 00:40:07.079 "name": "BaseBdev2", 00:40:07.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.079 "is_configured": false, 00:40:07.079 "data_offset": 0, 00:40:07.079 "data_size": 0 00:40:07.079 }, 00:40:07.079 { 00:40:07.079 "name": "BaseBdev3", 00:40:07.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.079 "is_configured": false, 00:40:07.079 "data_offset": 0, 00:40:07.079 "data_size": 0 00:40:07.079 } 00:40:07.079 ] 00:40:07.079 }' 00:40:07.079 19:33:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:07.079 19:33:22 -- common/autotest_common.sh@10 -- # set +x 00:40:08.013 19:33:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:40:08.013 [2024-04-18 19:33:23.898852] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:08.013 BaseBdev2 00:40:08.013 19:33:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:40:08.013 19:33:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:40:08.013 19:33:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:08.013 19:33:23 -- common/autotest_common.sh@887 -- # local i 00:40:08.013 19:33:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:08.013 19:33:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:08.013 19:33:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:08.272 19:33:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:08.530 [ 00:40:08.530 { 00:40:08.530 "name": "BaseBdev2", 00:40:08.530 "aliases": [ 00:40:08.530 "9de9b995-6a56-4825-aeab-bd4594c7a52b" 00:40:08.530 ], 00:40:08.530 "product_name": "Malloc disk", 00:40:08.530 "block_size": 512, 00:40:08.530 "num_blocks": 65536, 00:40:08.530 "uuid": "9de9b995-6a56-4825-aeab-bd4594c7a52b", 00:40:08.530 "assigned_rate_limits": { 00:40:08.530 "rw_ios_per_sec": 0, 00:40:08.530 "rw_mbytes_per_sec": 0, 00:40:08.530 "r_mbytes_per_sec": 0, 00:40:08.530 "w_mbytes_per_sec": 0 00:40:08.530 }, 00:40:08.530 "claimed": true, 00:40:08.530 "claim_type": "exclusive_write", 00:40:08.530 "zoned": false, 00:40:08.530 "supported_io_types": { 00:40:08.530 "read": true, 00:40:08.530 "write": true, 00:40:08.530 "unmap": true, 00:40:08.530 "write_zeroes": true, 00:40:08.530 "flush": true, 00:40:08.530 "reset": true, 00:40:08.530 
"compare": false, 00:40:08.530 "compare_and_write": false, 00:40:08.530 "abort": true, 00:40:08.530 "nvme_admin": false, 00:40:08.530 "nvme_io": false 00:40:08.530 }, 00:40:08.530 "memory_domains": [ 00:40:08.530 { 00:40:08.530 "dma_device_id": "system", 00:40:08.530 "dma_device_type": 1 00:40:08.530 }, 00:40:08.530 { 00:40:08.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:08.530 "dma_device_type": 2 00:40:08.530 } 00:40:08.530 ], 00:40:08.530 "driver_specific": {} 00:40:08.530 } 00:40:08.530 ] 00:40:08.788 19:33:24 -- common/autotest_common.sh@893 -- # return 0 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:08.788 "name": "Existed_Raid", 00:40:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:08.788 "strip_size_kb": 64, 00:40:08.788 "state": "configuring", 00:40:08.788 "raid_level": "raid5f", 00:40:08.788 "superblock": false, 00:40:08.788 "num_base_bdevs": 3, 00:40:08.788 "num_base_bdevs_discovered": 2, 00:40:08.788 "num_base_bdevs_operational": 3, 00:40:08.788 "base_bdevs_list": [ 00:40:08.788 { 00:40:08.788 "name": "BaseBdev1", 00:40:08.788 "uuid": "6ca8a9ba-c596-4686-94db-be1559ee3c00", 00:40:08.788 "is_configured": true, 00:40:08.788 "data_offset": 0, 00:40:08.788 "data_size": 65536 00:40:08.788 }, 00:40:08.788 { 00:40:08.788 "name": "BaseBdev2", 00:40:08.788 "uuid": "9de9b995-6a56-4825-aeab-bd4594c7a52b", 00:40:08.788 "is_configured": true, 00:40:08.788 "data_offset": 0, 00:40:08.788 "data_size": 65536 00:40:08.788 }, 00:40:08.788 { 00:40:08.788 "name": "BaseBdev3", 00:40:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:08.788 "is_configured": false, 00:40:08.788 "data_offset": 0, 00:40:08.788 "data_size": 0 00:40:08.788 } 00:40:08.788 ] 00:40:08.788 }' 00:40:08.788 19:33:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:08.788 19:33:24 -- common/autotest_common.sh@10 -- # set +x 00:40:09.719 19:33:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:40:09.719 [2024-04-18 19:33:25.597602] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:09.719 [2024-04-18 19:33:25.597688] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:40:09.719 [2024-04-18 19:33:25.597699] 
bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:40:09.719 [2024-04-18 19:33:25.597857] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:40:09.719 [2024-04-18 19:33:25.604386] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:40:09.719 [2024-04-18 19:33:25.604418] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:40:09.719 [2024-04-18 19:33:25.604729] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:09.719 BaseBdev3 00:40:09.719 19:33:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:40:09.719 19:33:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:40:09.719 19:33:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:09.719 19:33:25 -- common/autotest_common.sh@887 -- # local i 00:40:09.719 19:33:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:09.719 19:33:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:09.719 19:33:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:09.977 19:33:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:10.235 [ 00:40:10.235 { 00:40:10.235 "name": "BaseBdev3", 00:40:10.235 "aliases": [ 00:40:10.235 "23f5f40c-0ae6-456c-9874-e2a3ec7a455c" 00:40:10.235 ], 00:40:10.235 "product_name": "Malloc disk", 00:40:10.235 "block_size": 512, 00:40:10.235 "num_blocks": 65536, 00:40:10.235 "uuid": "23f5f40c-0ae6-456c-9874-e2a3ec7a455c", 00:40:10.235 "assigned_rate_limits": { 00:40:10.235 "rw_ios_per_sec": 0, 00:40:10.235 "rw_mbytes_per_sec": 0, 00:40:10.235 "r_mbytes_per_sec": 0, 00:40:10.235 "w_mbytes_per_sec": 0 00:40:10.235 }, 00:40:10.235 "claimed": true, 00:40:10.235 "claim_type": "exclusive_write", 00:40:10.235 "zoned": false, 00:40:10.235 "supported_io_types": { 00:40:10.235 "read": true, 00:40:10.235 "write": true, 00:40:10.235 "unmap": true, 00:40:10.235 "write_zeroes": true, 00:40:10.235 "flush": true, 00:40:10.235 "reset": true, 00:40:10.235 "compare": false, 00:40:10.235 "compare_and_write": false, 00:40:10.235 "abort": true, 00:40:10.235 "nvme_admin": false, 00:40:10.235 "nvme_io": false 00:40:10.235 }, 00:40:10.235 "memory_domains": [ 00:40:10.235 { 00:40:10.235 "dma_device_id": "system", 00:40:10.235 "dma_device_type": 1 00:40:10.235 }, 00:40:10.235 { 00:40:10.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:10.235 "dma_device_type": 2 00:40:10.235 } 00:40:10.235 ], 00:40:10.235 "driver_specific": {} 00:40:10.235 } 00:40:10.235 ] 00:40:10.235 19:33:26 -- common/autotest_common.sh@893 -- # return 0 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:10.235 19:33:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:10.493 19:33:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:10.493 "name": "Existed_Raid", 00:40:10.493 "uuid": "0881f93e-2fa6-486f-834b-e6a4b12d57c1", 00:40:10.493 "strip_size_kb": 64, 00:40:10.493 "state": "online", 00:40:10.493 "raid_level": "raid5f", 00:40:10.493 "superblock": false, 00:40:10.493 "num_base_bdevs": 3, 00:40:10.493 "num_base_bdevs_discovered": 3, 00:40:10.493 "num_base_bdevs_operational": 3, 00:40:10.493 "base_bdevs_list": [ 00:40:10.493 { 00:40:10.493 "name": "BaseBdev1", 00:40:10.493 "uuid": "6ca8a9ba-c596-4686-94db-be1559ee3c00", 00:40:10.493 "is_configured": true, 00:40:10.493 "data_offset": 0, 00:40:10.493 "data_size": 65536 00:40:10.493 }, 00:40:10.493 { 00:40:10.493 "name": "BaseBdev2", 00:40:10.493 "uuid": "9de9b995-6a56-4825-aeab-bd4594c7a52b", 00:40:10.493 "is_configured": true, 00:40:10.493 "data_offset": 0, 00:40:10.493 "data_size": 65536 00:40:10.493 }, 00:40:10.493 { 00:40:10.493 "name": "BaseBdev3", 00:40:10.493 "uuid": "23f5f40c-0ae6-456c-9874-e2a3ec7a455c", 00:40:10.493 "is_configured": true, 00:40:10.493 "data_offset": 0, 00:40:10.493 "data_size": 65536 00:40:10.493 } 00:40:10.493 ] 00:40:10.493 }' 00:40:10.493 19:33:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:10.493 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:40:11.429 19:33:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:40:11.429 [2024-04-18 19:33:27.243964] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:11.751 "name": "Existed_Raid", 00:40:11.751 
"uuid": "0881f93e-2fa6-486f-834b-e6a4b12d57c1", 00:40:11.751 "strip_size_kb": 64, 00:40:11.751 "state": "online", 00:40:11.751 "raid_level": "raid5f", 00:40:11.751 "superblock": false, 00:40:11.751 "num_base_bdevs": 3, 00:40:11.751 "num_base_bdevs_discovered": 2, 00:40:11.751 "num_base_bdevs_operational": 2, 00:40:11.751 "base_bdevs_list": [ 00:40:11.751 { 00:40:11.751 "name": null, 00:40:11.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.751 "is_configured": false, 00:40:11.751 "data_offset": 0, 00:40:11.751 "data_size": 65536 00:40:11.751 }, 00:40:11.751 { 00:40:11.751 "name": "BaseBdev2", 00:40:11.751 "uuid": "9de9b995-6a56-4825-aeab-bd4594c7a52b", 00:40:11.751 "is_configured": true, 00:40:11.751 "data_offset": 0, 00:40:11.751 "data_size": 65536 00:40:11.751 }, 00:40:11.751 { 00:40:11.751 "name": "BaseBdev3", 00:40:11.751 "uuid": "23f5f40c-0ae6-456c-9874-e2a3ec7a455c", 00:40:11.751 "is_configured": true, 00:40:11.751 "data_offset": 0, 00:40:11.751 "data_size": 65536 00:40:11.751 } 00:40:11.751 ] 00:40:11.751 }' 00:40:11.751 19:33:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:11.751 19:33:27 -- common/autotest_common.sh@10 -- # set +x 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:12.685 19:33:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:40:12.943 [2024-04-18 19:33:28.743390] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:12.943 [2024-04-18 19:33:28.743495] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:12.943 [2024-04-18 19:33:28.850160] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:12.943 19:33:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:40:12.943 19:33:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:40:12.943 19:33:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:12.943 19:33:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:40:13.510 19:33:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:40:13.510 19:33:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:13.510 19:33:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:40:13.510 [2024-04-18 19:33:29.378430] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:13.510 [2024-04-18 19:33:29.378510] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:40:13.767 19:33:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:40:13.767 19:33:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:40:13.767 19:33:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:13.767 19:33:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:40:14.025 19:33:29 -- 
bdev/bdev_raid.sh@281 -- # raid_bdev= 00:40:14.025 19:33:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:40:14.025 19:33:29 -- bdev/bdev_raid.sh@287 -- # killprocess 137647 00:40:14.025 19:33:29 -- common/autotest_common.sh@936 -- # '[' -z 137647 ']' 00:40:14.025 19:33:29 -- common/autotest_common.sh@940 -- # kill -0 137647 00:40:14.025 19:33:29 -- common/autotest_common.sh@941 -- # uname 00:40:14.025 19:33:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:40:14.025 19:33:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137647 00:40:14.025 19:33:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:40:14.025 killing process with pid 137647 00:40:14.025 19:33:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:40:14.025 19:33:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137647' 00:40:14.025 19:33:29 -- common/autotest_common.sh@955 -- # kill 137647 00:40:14.025 19:33:29 -- common/autotest_common.sh@960 -- # wait 137647 00:40:14.025 [2024-04-18 19:33:29.775788] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:14.026 [2024-04-18 19:33:29.775929] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:40:15.401 ************************************ 00:40:15.401 END TEST raid5f_state_function_test 00:40:15.401 ************************************ 00:40:15.401 00:40:15.401 real 0m13.459s 00:40:15.401 user 0m23.416s 00:40:15.401 sys 0m1.629s 00:40:15.401 19:33:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:15.401 19:33:31 -- common/autotest_common.sh@10 -- # set +x 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:40:15.401 19:33:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:40:15.401 19:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:15.401 19:33:31 -- common/autotest_common.sh@10 -- # set +x 00:40:15.401 ************************************ 00:40:15.401 START TEST raid5f_state_function_test_sb 00:40:15.401 ************************************ 00:40:15.401 19:33:31 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 true 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 
00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=138056 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138056' 00:40:15.401 Process raid pid: 138056 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138056 /var/tmp/spdk-raid.sock 00:40:15.401 19:33:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:40:15.401 19:33:31 -- common/autotest_common.sh@817 -- # '[' -z 138056 ']' 00:40:15.401 19:33:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:15.401 19:33:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:15.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:40:15.401 19:33:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:15.401 19:33:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:15.401 19:33:31 -- common/autotest_common.sh@10 -- # set +x 00:40:15.660 [2024-04-18 19:33:31.350668] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
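[Editor's note, not part of the captured output] The state-function tests in this log drive a standalone bdev_svc application over the /var/tmp/spdk-raid.sock RPC socket and then inspect the "Existed_Raid" JSON dumps it returns. Below is a minimal, hypothetical sketch of that flow; the rpc.py path, command names, and arguments are taken verbatim from the trace, while the ordering is simplified (the script actually registers the raid while its base bdevs are still missing, which is why the early dumps show "state": "configuring" and fewer discovered than operational base bdevs).

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Three 32 MiB malloc bdevs with 512-byte blocks act as base devices.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  $RPC bdev_malloc_create 32 512 -b BaseBdev3

  # Assemble a raid5f bdev with a 64 KiB strip size; -s adds the on-disk
  # superblock that distinguishes the *_sb variant of this test.
  $RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Dump the raid state the same way verify_raid_bdev_state does.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
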
00:40:15.660 [2024-04-18 19:33:31.350877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.660 [2024-04-18 19:33:31.529525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.918 [2024-04-18 19:33:31.750089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.177 [2024-04-18 19:33:31.952098] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:16.435 19:33:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:16.435 19:33:32 -- common/autotest_common.sh@850 -- # return 0 00:40:16.435 19:33:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:40:17.098 [2024-04-18 19:33:32.657148] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:17.098 [2024-04-18 19:33:32.657251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:17.098 [2024-04-18 19:33:32.657263] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:17.098 [2024-04-18 19:33:32.657281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:17.098 [2024-04-18 19:33:32.657288] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:17.098 [2024-04-18 19:33:32.657336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:17.098 19:33:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:17.098 19:33:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:17.098 19:33:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:17.098 19:33:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:17.099 "name": "Existed_Raid", 00:40:17.099 "uuid": "3838ef89-fac6-4675-849c-8e66011ea78d", 00:40:17.099 "strip_size_kb": 64, 00:40:17.099 "state": "configuring", 00:40:17.099 "raid_level": "raid5f", 00:40:17.099 "superblock": true, 00:40:17.099 "num_base_bdevs": 3, 00:40:17.099 "num_base_bdevs_discovered": 0, 00:40:17.099 "num_base_bdevs_operational": 3, 00:40:17.099 "base_bdevs_list": [ 00:40:17.099 { 00:40:17.099 "name": "BaseBdev1", 00:40:17.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:17.099 "is_configured": false, 00:40:17.099 "data_offset": 0, 00:40:17.099 "data_size": 0 00:40:17.099 }, 00:40:17.099 { 00:40:17.099 "name": "BaseBdev2", 00:40:17.099 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:40:17.099 "is_configured": false, 00:40:17.099 "data_offset": 0, 00:40:17.099 "data_size": 0 00:40:17.099 }, 00:40:17.099 { 00:40:17.099 "name": "BaseBdev3", 00:40:17.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:17.099 "is_configured": false, 00:40:17.099 "data_offset": 0, 00:40:17.099 "data_size": 0 00:40:17.099 } 00:40:17.099 ] 00:40:17.099 }' 00:40:17.099 19:33:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:17.099 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:40:18.035 19:33:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:18.035 [2024-04-18 19:33:33.901242] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:18.035 [2024-04-18 19:33:33.901291] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:40:18.035 19:33:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:40:18.295 [2024-04-18 19:33:34.173333] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:18.295 [2024-04-18 19:33:34.173402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:18.295 [2024-04-18 19:33:34.173413] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:18.295 [2024-04-18 19:33:34.173440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:18.295 [2024-04-18 19:33:34.173448] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:18.295 [2024-04-18 19:33:34.173475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:18.295 19:33:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:40:18.553 [2024-04-18 19:33:34.410816] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:18.553 BaseBdev1 00:40:18.553 19:33:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:40:18.553 19:33:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:40:18.553 19:33:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:18.553 19:33:34 -- common/autotest_common.sh@887 -- # local i 00:40:18.553 19:33:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:18.553 19:33:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:18.553 19:33:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:18.811 19:33:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:19.070 [ 00:40:19.070 { 00:40:19.070 "name": "BaseBdev1", 00:40:19.070 "aliases": [ 00:40:19.070 "9d8414bd-96ad-4f62-b7ca-237dce55bc65" 00:40:19.070 ], 00:40:19.070 "product_name": "Malloc disk", 00:40:19.070 "block_size": 512, 00:40:19.070 "num_blocks": 65536, 00:40:19.070 "uuid": "9d8414bd-96ad-4f62-b7ca-237dce55bc65", 00:40:19.070 "assigned_rate_limits": { 00:40:19.070 "rw_ios_per_sec": 0, 00:40:19.070 "rw_mbytes_per_sec": 0, 00:40:19.070 "r_mbytes_per_sec": 0, 00:40:19.070 
"w_mbytes_per_sec": 0 00:40:19.070 }, 00:40:19.070 "claimed": true, 00:40:19.070 "claim_type": "exclusive_write", 00:40:19.070 "zoned": false, 00:40:19.070 "supported_io_types": { 00:40:19.070 "read": true, 00:40:19.070 "write": true, 00:40:19.070 "unmap": true, 00:40:19.070 "write_zeroes": true, 00:40:19.070 "flush": true, 00:40:19.070 "reset": true, 00:40:19.070 "compare": false, 00:40:19.070 "compare_and_write": false, 00:40:19.070 "abort": true, 00:40:19.070 "nvme_admin": false, 00:40:19.070 "nvme_io": false 00:40:19.070 }, 00:40:19.070 "memory_domains": [ 00:40:19.070 { 00:40:19.070 "dma_device_id": "system", 00:40:19.070 "dma_device_type": 1 00:40:19.070 }, 00:40:19.070 { 00:40:19.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:19.070 "dma_device_type": 2 00:40:19.070 } 00:40:19.070 ], 00:40:19.070 "driver_specific": {} 00:40:19.070 } 00:40:19.070 ] 00:40:19.070 19:33:34 -- common/autotest_common.sh@893 -- # return 0 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:19.070 19:33:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:19.328 19:33:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:19.328 "name": "Existed_Raid", 00:40:19.328 "uuid": "6a95065f-ea9d-4b66-a0fe-4425913d49bf", 00:40:19.328 "strip_size_kb": 64, 00:40:19.328 "state": "configuring", 00:40:19.328 "raid_level": "raid5f", 00:40:19.328 "superblock": true, 00:40:19.328 "num_base_bdevs": 3, 00:40:19.328 "num_base_bdevs_discovered": 1, 00:40:19.328 "num_base_bdevs_operational": 3, 00:40:19.328 "base_bdevs_list": [ 00:40:19.328 { 00:40:19.328 "name": "BaseBdev1", 00:40:19.328 "uuid": "9d8414bd-96ad-4f62-b7ca-237dce55bc65", 00:40:19.328 "is_configured": true, 00:40:19.328 "data_offset": 2048, 00:40:19.328 "data_size": 63488 00:40:19.328 }, 00:40:19.328 { 00:40:19.328 "name": "BaseBdev2", 00:40:19.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:19.328 "is_configured": false, 00:40:19.328 "data_offset": 0, 00:40:19.328 "data_size": 0 00:40:19.328 }, 00:40:19.328 { 00:40:19.328 "name": "BaseBdev3", 00:40:19.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:19.328 "is_configured": false, 00:40:19.328 "data_offset": 0, 00:40:19.328 "data_size": 0 00:40:19.328 } 00:40:19.328 ] 00:40:19.328 }' 00:40:19.329 19:33:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:19.329 19:33:35 -- common/autotest_common.sh@10 -- # set +x 00:40:20.262 19:33:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:20.520 [2024-04-18 19:33:36.227307] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:40:20.521 [2024-04-18 19:33:36.227413] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:40:20.521 19:33:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:40:20.521 19:33:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:40:20.780 19:33:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:40:21.037 BaseBdev1 00:40:21.037 19:33:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:40:21.037 19:33:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:40:21.037 19:33:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:21.037 19:33:36 -- common/autotest_common.sh@887 -- # local i 00:40:21.037 19:33:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:21.037 19:33:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:21.037 19:33:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:21.294 19:33:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:21.557 [ 00:40:21.557 { 00:40:21.557 "name": "BaseBdev1", 00:40:21.557 "aliases": [ 00:40:21.557 "4c6838e6-2a51-4684-b608-052d8b6aadbc" 00:40:21.557 ], 00:40:21.557 "product_name": "Malloc disk", 00:40:21.558 "block_size": 512, 00:40:21.558 "num_blocks": 65536, 00:40:21.558 "uuid": "4c6838e6-2a51-4684-b608-052d8b6aadbc", 00:40:21.558 "assigned_rate_limits": { 00:40:21.558 "rw_ios_per_sec": 0, 00:40:21.558 "rw_mbytes_per_sec": 0, 00:40:21.558 "r_mbytes_per_sec": 0, 00:40:21.558 "w_mbytes_per_sec": 0 00:40:21.558 }, 00:40:21.558 "claimed": false, 00:40:21.558 "zoned": false, 00:40:21.558 "supported_io_types": { 00:40:21.558 "read": true, 00:40:21.558 "write": true, 00:40:21.558 "unmap": true, 00:40:21.558 "write_zeroes": true, 00:40:21.558 "flush": true, 00:40:21.558 "reset": true, 00:40:21.558 "compare": false, 00:40:21.558 "compare_and_write": false, 00:40:21.558 "abort": true, 00:40:21.558 "nvme_admin": false, 00:40:21.558 "nvme_io": false 00:40:21.558 }, 00:40:21.558 "memory_domains": [ 00:40:21.558 { 00:40:21.558 "dma_device_id": "system", 00:40:21.558 "dma_device_type": 1 00:40:21.558 }, 00:40:21.558 { 00:40:21.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:21.558 "dma_device_type": 2 00:40:21.558 } 00:40:21.558 ], 00:40:21.558 "driver_specific": {} 00:40:21.558 } 00:40:21.558 ] 00:40:21.558 19:33:37 -- common/autotest_common.sh@893 -- # return 0 00:40:21.558 19:33:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:40:21.815 [2024-04-18 19:33:37.613602] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:21.815 [2024-04-18 19:33:37.616209] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:21.815 [2024-04-18 19:33:37.616287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:21.815 [2024-04-18 19:33:37.616299] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:21.815 [2024-04-18 19:33:37.616328] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:21.815 19:33:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:22.072 19:33:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:22.072 "name": "Existed_Raid", 00:40:22.072 "uuid": "61385379-da16-4d9f-a5fb-d96b9d89e714", 00:40:22.072 "strip_size_kb": 64, 00:40:22.072 "state": "configuring", 00:40:22.072 "raid_level": "raid5f", 00:40:22.072 "superblock": true, 00:40:22.072 "num_base_bdevs": 3, 00:40:22.072 "num_base_bdevs_discovered": 1, 00:40:22.072 "num_base_bdevs_operational": 3, 00:40:22.072 "base_bdevs_list": [ 00:40:22.072 { 00:40:22.072 "name": "BaseBdev1", 00:40:22.072 "uuid": "4c6838e6-2a51-4684-b608-052d8b6aadbc", 00:40:22.072 "is_configured": true, 00:40:22.072 "data_offset": 2048, 00:40:22.072 "data_size": 63488 00:40:22.072 }, 00:40:22.072 { 00:40:22.072 "name": "BaseBdev2", 00:40:22.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.072 "is_configured": false, 00:40:22.072 "data_offset": 0, 00:40:22.072 "data_size": 0 00:40:22.072 }, 00:40:22.072 { 00:40:22.072 "name": "BaseBdev3", 00:40:22.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.072 "is_configured": false, 00:40:22.072 "data_offset": 0, 00:40:22.072 "data_size": 0 00:40:22.072 } 00:40:22.072 ] 00:40:22.072 }' 00:40:22.072 19:33:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:22.072 19:33:37 -- common/autotest_common.sh@10 -- # set +x 00:40:23.004 19:33:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:40:23.004 [2024-04-18 19:33:38.835780] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:23.004 BaseBdev2 00:40:23.004 19:33:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:40:23.004 19:33:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:40:23.004 19:33:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:23.004 19:33:38 -- common/autotest_common.sh@887 -- # local i 00:40:23.004 19:33:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:23.004 19:33:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:23.004 19:33:38 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:23.262 19:33:39 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:23.520 [ 00:40:23.520 { 00:40:23.520 "name": "BaseBdev2", 00:40:23.520 "aliases": [ 00:40:23.520 "286af655-84cd-4959-aa1e-6a04b97cd7f8" 00:40:23.520 ], 00:40:23.520 "product_name": "Malloc disk", 00:40:23.520 "block_size": 512, 00:40:23.520 "num_blocks": 65536, 00:40:23.520 "uuid": "286af655-84cd-4959-aa1e-6a04b97cd7f8", 00:40:23.520 "assigned_rate_limits": { 00:40:23.520 "rw_ios_per_sec": 0, 00:40:23.520 "rw_mbytes_per_sec": 0, 00:40:23.520 "r_mbytes_per_sec": 0, 00:40:23.520 "w_mbytes_per_sec": 0 00:40:23.520 }, 00:40:23.520 "claimed": true, 00:40:23.520 "claim_type": "exclusive_write", 00:40:23.520 "zoned": false, 00:40:23.520 "supported_io_types": { 00:40:23.520 "read": true, 00:40:23.520 "write": true, 00:40:23.520 "unmap": true, 00:40:23.520 "write_zeroes": true, 00:40:23.520 "flush": true, 00:40:23.520 "reset": true, 00:40:23.520 "compare": false, 00:40:23.520 "compare_and_write": false, 00:40:23.520 "abort": true, 00:40:23.520 "nvme_admin": false, 00:40:23.520 "nvme_io": false 00:40:23.520 }, 00:40:23.520 "memory_domains": [ 00:40:23.520 { 00:40:23.520 "dma_device_id": "system", 00:40:23.520 "dma_device_type": 1 00:40:23.520 }, 00:40:23.520 { 00:40:23.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:23.520 "dma_device_type": 2 00:40:23.520 } 00:40:23.520 ], 00:40:23.520 "driver_specific": {} 00:40:23.520 } 00:40:23.520 ] 00:40:23.520 19:33:39 -- common/autotest_common.sh@893 -- # return 0 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:23.520 19:33:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:23.777 19:33:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:23.777 "name": "Existed_Raid", 00:40:23.777 "uuid": "61385379-da16-4d9f-a5fb-d96b9d89e714", 00:40:23.777 "strip_size_kb": 64, 00:40:23.777 "state": "configuring", 00:40:23.777 "raid_level": "raid5f", 00:40:23.777 "superblock": true, 00:40:23.777 "num_base_bdevs": 3, 00:40:23.777 "num_base_bdevs_discovered": 2, 00:40:23.777 "num_base_bdevs_operational": 3, 00:40:23.777 "base_bdevs_list": [ 00:40:23.777 { 00:40:23.777 "name": "BaseBdev1", 00:40:23.777 "uuid": "4c6838e6-2a51-4684-b608-052d8b6aadbc", 00:40:23.777 "is_configured": true, 00:40:23.777 "data_offset": 2048, 00:40:23.777 "data_size": 63488 00:40:23.777 }, 00:40:23.777 { 00:40:23.777 "name": "BaseBdev2", 00:40:23.777 "uuid": "286af655-84cd-4959-aa1e-6a04b97cd7f8", 00:40:23.777 
"is_configured": true, 00:40:23.777 "data_offset": 2048, 00:40:23.777 "data_size": 63488 00:40:23.777 }, 00:40:23.777 { 00:40:23.777 "name": "BaseBdev3", 00:40:23.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:23.777 "is_configured": false, 00:40:23.777 "data_offset": 0, 00:40:23.777 "data_size": 0 00:40:23.777 } 00:40:23.777 ] 00:40:23.777 }' 00:40:23.777 19:33:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:23.777 19:33:39 -- common/autotest_common.sh@10 -- # set +x 00:40:24.713 19:33:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:40:24.971 [2024-04-18 19:33:40.700342] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:24.971 [2024-04-18 19:33:40.700686] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:40:24.971 [2024-04-18 19:33:40.700702] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:24.971 [2024-04-18 19:33:40.700864] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:40:24.971 BaseBdev3 00:40:24.971 [2024-04-18 19:33:40.707183] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:40:24.971 [2024-04-18 19:33:40.707211] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:40:24.971 [2024-04-18 19:33:40.707424] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:24.971 19:33:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:40:24.971 19:33:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:40:24.971 19:33:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:40:24.971 19:33:40 -- common/autotest_common.sh@887 -- # local i 00:40:24.971 19:33:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:40:24.971 19:33:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:40:24.971 19:33:40 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:25.229 19:33:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:25.487 [ 00:40:25.487 { 00:40:25.487 "name": "BaseBdev3", 00:40:25.487 "aliases": [ 00:40:25.487 "c9906fee-2abe-4358-911a-105539c034d7" 00:40:25.487 ], 00:40:25.487 "product_name": "Malloc disk", 00:40:25.487 "block_size": 512, 00:40:25.487 "num_blocks": 65536, 00:40:25.487 "uuid": "c9906fee-2abe-4358-911a-105539c034d7", 00:40:25.487 "assigned_rate_limits": { 00:40:25.487 "rw_ios_per_sec": 0, 00:40:25.487 "rw_mbytes_per_sec": 0, 00:40:25.487 "r_mbytes_per_sec": 0, 00:40:25.487 "w_mbytes_per_sec": 0 00:40:25.487 }, 00:40:25.487 "claimed": true, 00:40:25.487 "claim_type": "exclusive_write", 00:40:25.487 "zoned": false, 00:40:25.487 "supported_io_types": { 00:40:25.487 "read": true, 00:40:25.487 "write": true, 00:40:25.487 "unmap": true, 00:40:25.487 "write_zeroes": true, 00:40:25.487 "flush": true, 00:40:25.487 "reset": true, 00:40:25.487 "compare": false, 00:40:25.487 "compare_and_write": false, 00:40:25.487 "abort": true, 00:40:25.487 "nvme_admin": false, 00:40:25.487 "nvme_io": false 00:40:25.487 }, 00:40:25.487 "memory_domains": [ 00:40:25.487 { 00:40:25.487 "dma_device_id": "system", 00:40:25.487 "dma_device_type": 1 00:40:25.487 }, 00:40:25.487 { 00:40:25.487 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:25.487 "dma_device_type": 2 00:40:25.487 } 00:40:25.487 ], 00:40:25.487 "driver_specific": {} 00:40:25.487 } 00:40:25.487 ] 00:40:25.487 19:33:41 -- common/autotest_common.sh@893 -- # return 0 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:25.487 19:33:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:25.745 19:33:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:25.745 "name": "Existed_Raid", 00:40:25.745 "uuid": "61385379-da16-4d9f-a5fb-d96b9d89e714", 00:40:25.745 "strip_size_kb": 64, 00:40:25.745 "state": "online", 00:40:25.745 "raid_level": "raid5f", 00:40:25.745 "superblock": true, 00:40:25.745 "num_base_bdevs": 3, 00:40:25.745 "num_base_bdevs_discovered": 3, 00:40:25.745 "num_base_bdevs_operational": 3, 00:40:25.745 "base_bdevs_list": [ 00:40:25.745 { 00:40:25.745 "name": "BaseBdev1", 00:40:25.745 "uuid": "4c6838e6-2a51-4684-b608-052d8b6aadbc", 00:40:25.745 "is_configured": true, 00:40:25.745 "data_offset": 2048, 00:40:25.745 "data_size": 63488 00:40:25.745 }, 00:40:25.745 { 00:40:25.745 "name": "BaseBdev2", 00:40:25.745 "uuid": "286af655-84cd-4959-aa1e-6a04b97cd7f8", 00:40:25.745 "is_configured": true, 00:40:25.745 "data_offset": 2048, 00:40:25.745 "data_size": 63488 00:40:25.745 }, 00:40:25.745 { 00:40:25.745 "name": "BaseBdev3", 00:40:25.745 "uuid": "c9906fee-2abe-4358-911a-105539c034d7", 00:40:25.745 "is_configured": true, 00:40:25.745 "data_offset": 2048, 00:40:25.745 "data_size": 63488 00:40:25.745 } 00:40:25.745 ] 00:40:25.745 }' 00:40:25.745 19:33:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:25.745 19:33:41 -- common/autotest_common.sh@10 -- # set +x 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:40:26.681 [2024-04-18 19:33:42.467724] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:40:26.681 19:33:42 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:26.681 19:33:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:26.682 19:33:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:26.940 19:33:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:26.940 "name": "Existed_Raid", 00:40:26.940 "uuid": "61385379-da16-4d9f-a5fb-d96b9d89e714", 00:40:26.940 "strip_size_kb": 64, 00:40:26.940 "state": "online", 00:40:26.940 "raid_level": "raid5f", 00:40:26.940 "superblock": true, 00:40:26.940 "num_base_bdevs": 3, 00:40:26.940 "num_base_bdevs_discovered": 2, 00:40:26.940 "num_base_bdevs_operational": 2, 00:40:26.940 "base_bdevs_list": [ 00:40:26.940 { 00:40:26.940 "name": null, 00:40:26.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:26.940 "is_configured": false, 00:40:26.940 "data_offset": 2048, 00:40:26.940 "data_size": 63488 00:40:26.940 }, 00:40:26.940 { 00:40:26.940 "name": "BaseBdev2", 00:40:26.940 "uuid": "286af655-84cd-4959-aa1e-6a04b97cd7f8", 00:40:26.940 "is_configured": true, 00:40:26.940 "data_offset": 2048, 00:40:26.940 "data_size": 63488 00:40:26.940 }, 00:40:26.940 { 00:40:26.940 "name": "BaseBdev3", 00:40:26.940 "uuid": "c9906fee-2abe-4358-911a-105539c034d7", 00:40:26.940 "is_configured": true, 00:40:26.940 "data_offset": 2048, 00:40:26.940 "data_size": 63488 00:40:26.940 } 00:40:26.940 ] 00:40:26.940 }' 00:40:26.940 19:33:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:26.940 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:40:27.875 19:33:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:40:27.875 19:33:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:40:27.875 19:33:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:27.875 19:33:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:40:28.134 19:33:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:40:28.134 19:33:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:28.134 19:33:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:40:28.134 [2024-04-18 19:33:44.011301] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:28.134 [2024-04-18 19:33:44.011582] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:28.392 [2024-04-18 19:33:44.138947] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:28.392 19:33:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:40:28.392 19:33:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:40:28.392 19:33:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:28.392 19:33:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:40:28.651 19:33:44 -- 
bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:40:28.651 19:33:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:28.651 19:33:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:40:28.910 [2024-04-18 19:33:44.703390] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:28.910 [2024-04-18 19:33:44.703505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:40:29.169 19:33:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:40:29.169 19:33:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:40:29.169 19:33:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:29.169 19:33:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:40:29.427 19:33:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:40:29.427 19:33:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:40:29.427 19:33:45 -- bdev/bdev_raid.sh@287 -- # killprocess 138056 00:40:29.427 19:33:45 -- common/autotest_common.sh@936 -- # '[' -z 138056 ']' 00:40:29.427 19:33:45 -- common/autotest_common.sh@940 -- # kill -0 138056 00:40:29.427 19:33:45 -- common/autotest_common.sh@941 -- # uname 00:40:29.427 19:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:40:29.427 19:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138056 00:40:29.427 killing process with pid 138056 00:40:29.427 19:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:40:29.427 19:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:40:29.427 19:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138056' 00:40:29.427 19:33:45 -- common/autotest_common.sh@955 -- # kill 138056 00:40:29.427 19:33:45 -- common/autotest_common.sh@960 -- # wait 138056 00:40:29.427 [2024-04-18 19:33:45.164243] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:29.427 [2024-04-18 19:33:45.164419] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:30.835 19:33:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:40:30.835 00:40:30.835 real 0m15.447s 00:40:30.835 user 0m26.841s 00:40:30.835 sys 0m1.894s 00:40:30.835 ************************************ 00:40:30.835 END TEST raid5f_state_function_test_sb 00:40:30.835 ************************************ 00:40:30.835 19:33:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:30.835 19:33:46 -- common/autotest_common.sh@10 -- # set +x 00:40:31.093 19:33:46 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:40:31.094 19:33:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:40:31.094 19:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:31.094 19:33:46 -- common/autotest_common.sh@10 -- # set +x 00:40:31.094 ************************************ 00:40:31.094 START TEST raid5f_superblock_test 00:40:31.094 ************************************ 00:40:31.094 19:33:46 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 3 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@341 -- 
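[Editor's note, not part of the captured output] The raid5f_superblock_test starting here differs from the state-function tests in that each base device is a malloc bdev wrapped in a passthru bdev (pt1..pt3) with a fixed UUID, and raid_bdev1 is then created over the passthru bdevs with the -s superblock flag. A hypothetical condensed sketch, using only rpc.py calls that appear verbatim in the trace (the loop itself is illustrative):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # One malloc bdev per slot, each exposed through a passthru bdev whose
  # fixed UUID lets the raid superblock identify it later.
  for i in 1 2 3; do
      $RPC bdev_malloc_create 32 512 -b malloc$i
      $RPC bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i
  done

  # raid_bdev1 spans the passthru bdevs; -s asks for an on-disk superblock.
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s

  # The test then reads back the UUID generated for the assembled raid.
  $RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'
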
# base_bdevs_pt=() 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@357 -- # raid_pid=138513 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@358 -- # waitforlisten 138513 /var/tmp/spdk-raid.sock 00:40:31.094 19:33:46 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:40:31.094 19:33:46 -- common/autotest_common.sh@817 -- # '[' -z 138513 ']' 00:40:31.094 19:33:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:31.094 19:33:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:31.094 19:33:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:31.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:40:31.094 19:33:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:31.094 19:33:46 -- common/autotest_common.sh@10 -- # set +x 00:40:31.094 [2024-04-18 19:33:46.877131] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:40:31.094 [2024-04-18 19:33:46.877440] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138513 ] 00:40:31.353 [2024-04-18 19:33:47.045177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:31.612 [2024-04-18 19:33:47.309704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.612 [2024-04-18 19:33:47.517846] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:32.178 19:33:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:32.178 19:33:47 -- common/autotest_common.sh@850 -- # return 0 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:32.178 19:33:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:40:32.436 malloc1 00:40:32.436 19:33:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:32.694 [2024-04-18 19:33:48.455665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:32.694 [2024-04-18 19:33:48.455769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:32.694 [2024-04-18 19:33:48.455810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:40:32.694 [2024-04-18 19:33:48.455853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:32.694 [2024-04-18 19:33:48.458359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:32.694 [2024-04-18 19:33:48.458414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:32.694 pt1 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:32.694 19:33:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:40:32.953 malloc2 00:40:32.953 19:33:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:40:33.211 [2024-04-18 19:33:49.017193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:33.211 [2024-04-18 19:33:49.017294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:33.211 [2024-04-18 19:33:49.017338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:40:33.211 [2024-04-18 19:33:49.017393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:33.211 [2024-04-18 19:33:49.019936] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:33.211 [2024-04-18 19:33:49.019990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:33.211 pt2 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:33.211 19:33:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:40:33.469 malloc3 00:40:33.469 19:33:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:33.727 [2024-04-18 19:33:49.497639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:33.727 [2024-04-18 19:33:49.497736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:33.727 [2024-04-18 19:33:49.497776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:40:33.727 [2024-04-18 19:33:49.497817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:33.727 [2024-04-18 19:33:49.500292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:33.727 [2024-04-18 19:33:49.500348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:33.727 pt3 00:40:33.727 19:33:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:40:33.727 19:33:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:40:33.727 19:33:49 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:40:33.987 [2024-04-18 19:33:49.785706] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:33.987 [2024-04-18 19:33:49.787840] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:33.987 [2024-04-18 19:33:49.787907] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:33.987 [2024-04-18 19:33:49.788112] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:40:33.987 [2024-04-18 19:33:49.788129] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:33.987 [2024-04-18 19:33:49.788256] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:40:33.987 [2024-04-18 19:33:49.793730] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:40:33.987 [2024-04-18 19:33:49.793756] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:40:33.987 [2024-04-18 19:33:49.793959] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.987 19:33:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:34.245 19:33:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:34.245 "name": "raid_bdev1", 00:40:34.245 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:34.245 "strip_size_kb": 64, 00:40:34.245 "state": "online", 00:40:34.245 "raid_level": "raid5f", 00:40:34.245 "superblock": true, 00:40:34.245 "num_base_bdevs": 3, 00:40:34.245 "num_base_bdevs_discovered": 3, 00:40:34.245 "num_base_bdevs_operational": 3, 00:40:34.245 "base_bdevs_list": [ 00:40:34.245 { 00:40:34.245 "name": "pt1", 00:40:34.245 "uuid": "142a88d2-f717-584b-a74a-c004cdeb10b3", 00:40:34.245 "is_configured": true, 00:40:34.245 "data_offset": 2048, 00:40:34.245 "data_size": 63488 00:40:34.245 }, 00:40:34.245 { 00:40:34.245 "name": "pt2", 00:40:34.245 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:34.245 "is_configured": true, 00:40:34.245 "data_offset": 2048, 00:40:34.245 "data_size": 63488 00:40:34.245 }, 00:40:34.245 { 00:40:34.245 "name": "pt3", 00:40:34.245 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:34.245 "is_configured": true, 00:40:34.245 "data_offset": 2048, 00:40:34.245 "data_size": 63488 00:40:34.245 } 00:40:34.245 ] 00:40:34.245 }' 00:40:34.245 19:33:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:34.245 19:33:50 -- common/autotest_common.sh@10 -- # set +x 00:40:35.176 19:33:50 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:35.176 19:33:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:40:35.478 [2024-04-18 19:33:51.181246] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:35.478 19:33:51 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7e526e17-9fde-4043-92c0-ea651e3e686a 00:40:35.478 19:33:51 -- bdev/bdev_raid.sh@380 -- # '[' -z 7e526e17-9fde-4043-92c0-ea651e3e686a ']' 00:40:35.478 19:33:51 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:35.736 [2024-04-18 19:33:51.513142] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:35.736 [2024-04-18 19:33:51.513181] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:35.736 [2024-04-18 19:33:51.513260] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:35.736 [2024-04-18 19:33:51.513349] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:35.736 [2024-04-18 19:33:51.513361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:40:35.736 19:33:51 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:35.736 19:33:51 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:40:35.994 19:33:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:40:35.994 19:33:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:40:35.994 19:33:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:40:35.994 19:33:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:40:36.252 19:33:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:40:36.252 19:33:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:36.510 19:33:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:40:36.510 19:33:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:40:36.768 19:33:52 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:40:36.768 19:33:52 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:40:37.026 19:33:52 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:40:37.026 19:33:52 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:40:37.026 19:33:52 -- common/autotest_common.sh@638 -- # local es=0 00:40:37.026 19:33:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:40:37.026 19:33:52 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:37.026 19:33:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:37.026 19:33:52 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:37.026 19:33:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:37.026 19:33:52 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:37.026 19:33:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:40:37.026 19:33:52 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:37.026 19:33:52 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:37.026 19:33:52 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:40:37.284 [2024-04-18 19:33:53.013439] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:40:37.284 [2024-04-18 19:33:53.015609] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:40:37.284 [2024-04-18 19:33:53.015665] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:40:37.284 [2024-04-18 19:33:53.015724] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:40:37.284 [2024-04-18 19:33:53.015809] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:40:37.284 [2024-04-18 19:33:53.015841] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:40:37.284 [2024-04-18 19:33:53.015890] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:37.284 [2024-04-18 19:33:53.015902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:40:37.284 request: 00:40:37.284 { 00:40:37.284 "name": "raid_bdev1", 00:40:37.284 "raid_level": "raid5f", 00:40:37.284 "base_bdevs": [ 00:40:37.284 "malloc1", 00:40:37.284 "malloc2", 00:40:37.284 "malloc3" 00:40:37.284 ], 00:40:37.284 "superblock": false, 00:40:37.284 "strip_size_kb": 64, 00:40:37.284 "method": "bdev_raid_create", 00:40:37.284 "req_id": 1 00:40:37.284 } 00:40:37.284 Got JSON-RPC error response 00:40:37.284 response: 00:40:37.284 { 00:40:37.284 "code": -17, 00:40:37.284 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:40:37.284 } 00:40:37.284 19:33:53 -- common/autotest_common.sh@641 -- # es=1 00:40:37.284 19:33:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:40:37.284 19:33:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:40:37.284 19:33:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:40:37.284 19:33:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:37.284 19:33:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:40:37.542 19:33:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:40:37.542 19:33:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:40:37.542 19:33:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:37.800 [2024-04-18 19:33:53.505489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:37.800 [2024-04-18 19:33:53.505586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:37.800 [2024-04-18 19:33:53.505626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:40:37.800 [2024-04-18 19:33:53.505647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:37.800 [2024-04-18 19:33:53.508241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:37.800 [2024-04-18 19:33:53.508297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:37.800 [2024-04-18 19:33:53.508435] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:40:37.800 [2024-04-18 19:33:53.508502] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:37.800 pt1 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:37.800 19:33:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:38.060 19:33:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:38.060 "name": "raid_bdev1", 00:40:38.060 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:38.060 "strip_size_kb": 64, 00:40:38.060 "state": "configuring", 00:40:38.060 "raid_level": "raid5f", 00:40:38.060 "superblock": true, 00:40:38.060 "num_base_bdevs": 3, 00:40:38.060 "num_base_bdevs_discovered": 1, 00:40:38.060 "num_base_bdevs_operational": 3, 00:40:38.060 "base_bdevs_list": [ 00:40:38.060 { 00:40:38.060 "name": "pt1", 00:40:38.060 "uuid": "142a88d2-f717-584b-a74a-c004cdeb10b3", 00:40:38.060 "is_configured": true, 00:40:38.060 "data_offset": 2048, 00:40:38.060 "data_size": 63488 00:40:38.060 }, 00:40:38.060 { 00:40:38.060 "name": null, 00:40:38.060 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:38.060 "is_configured": false, 00:40:38.060 "data_offset": 2048, 00:40:38.060 "data_size": 63488 00:40:38.060 }, 00:40:38.060 { 00:40:38.060 "name": null, 00:40:38.060 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:38.060 "is_configured": false, 00:40:38.060 "data_offset": 2048, 00:40:38.060 "data_size": 63488 00:40:38.060 } 00:40:38.060 ] 00:40:38.060 }' 00:40:38.060 19:33:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:38.060 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:40:38.627 19:33:54 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:40:38.627 19:33:54 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:38.885 [2024-04-18 19:33:54.765836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:38.885 [2024-04-18 19:33:54.765949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:38.885 [2024-04-18 19:33:54.766005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:40:38.885 [2024-04-18 19:33:54.766026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:38.885 [2024-04-18 19:33:54.766549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:38.885 [2024-04-18 19:33:54.766591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:38.885 [2024-04-18 19:33:54.766724] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:40:38.885 [2024-04-18 19:33:54.766751] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:38.885 pt2 00:40:38.885 19:33:54 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:39.143 [2024-04-18 19:33:54.997916] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:39.143 19:33:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:39.402 19:33:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:39.402 "name": "raid_bdev1", 00:40:39.402 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:39.402 "strip_size_kb": 64, 00:40:39.402 "state": "configuring", 00:40:39.402 "raid_level": "raid5f", 00:40:39.402 "superblock": true, 00:40:39.402 "num_base_bdevs": 3, 00:40:39.402 "num_base_bdevs_discovered": 1, 00:40:39.402 "num_base_bdevs_operational": 3, 00:40:39.402 "base_bdevs_list": [ 00:40:39.402 { 00:40:39.402 "name": "pt1", 00:40:39.402 "uuid": "142a88d2-f717-584b-a74a-c004cdeb10b3", 00:40:39.402 "is_configured": true, 00:40:39.402 "data_offset": 2048, 00:40:39.402 "data_size": 63488 00:40:39.402 }, 00:40:39.402 { 00:40:39.402 "name": null, 00:40:39.402 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:39.402 "is_configured": false, 00:40:39.402 "data_offset": 2048, 00:40:39.402 "data_size": 63488 00:40:39.402 }, 00:40:39.402 { 00:40:39.402 "name": null, 00:40:39.402 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:39.402 "is_configured": false, 00:40:39.402 "data_offset": 2048, 00:40:39.402 "data_size": 63488 00:40:39.402 } 00:40:39.402 ] 00:40:39.402 }' 00:40:39.402 19:33:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:39.402 19:33:55 -- common/autotest_common.sh@10 -- # set +x 00:40:40.336 19:33:56 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:40:40.336 19:33:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:40:40.336 19:33:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:40.336 [2024-04-18 19:33:56.254152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:40.336 [2024-04-18 19:33:56.254259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:40.336 [2024-04-18 19:33:56.254298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:40.336 [2024-04-18 19:33:56.254326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:40.336 [2024-04-18 19:33:56.254823] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:40.336 [2024-04-18 19:33:56.254869] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:40.336 [2024-04-18 19:33:56.254993] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:40:40.336 [2024-04-18 19:33:56.255026] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:40.336 pt2 00:40:40.594 19:33:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:40:40.594 19:33:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:40:40.594 19:33:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:40.862 [2024-04-18 19:33:56.546236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:40.862 [2024-04-18 19:33:56.546330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:40.862 [2024-04-18 19:33:56.546366] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:40:40.862 [2024-04-18 19:33:56.546393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:40.862 [2024-04-18 19:33:56.546926] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:40.862 [2024-04-18 19:33:56.546973] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:40.862 [2024-04-18 19:33:56.547110] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:40:40.862 [2024-04-18 19:33:56.547146] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:40.862 [2024-04-18 19:33:56.547278] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:40:40.862 [2024-04-18 19:33:56.547290] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:40.862 [2024-04-18 19:33:56.547422] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:40:40.862 [2024-04-18 19:33:56.552840] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:40:40.862 [2024-04-18 19:33:56.552869] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:40:40.862 [2024-04-18 19:33:56.553057] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:40.862 pt3 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:40.862 19:33:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:40.862 
19:33:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:41.135 19:33:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:41.135 "name": "raid_bdev1", 00:40:41.135 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:41.135 "strip_size_kb": 64, 00:40:41.135 "state": "online", 00:40:41.135 "raid_level": "raid5f", 00:40:41.135 "superblock": true, 00:40:41.135 "num_base_bdevs": 3, 00:40:41.135 "num_base_bdevs_discovered": 3, 00:40:41.135 "num_base_bdevs_operational": 3, 00:40:41.135 "base_bdevs_list": [ 00:40:41.135 { 00:40:41.135 "name": "pt1", 00:40:41.135 "uuid": "142a88d2-f717-584b-a74a-c004cdeb10b3", 00:40:41.135 "is_configured": true, 00:40:41.135 "data_offset": 2048, 00:40:41.135 "data_size": 63488 00:40:41.135 }, 00:40:41.135 { 00:40:41.135 "name": "pt2", 00:40:41.135 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:41.135 "is_configured": true, 00:40:41.135 "data_offset": 2048, 00:40:41.135 "data_size": 63488 00:40:41.135 }, 00:40:41.135 { 00:40:41.135 "name": "pt3", 00:40:41.135 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:41.135 "is_configured": true, 00:40:41.135 "data_offset": 2048, 00:40:41.135 "data_size": 63488 00:40:41.135 } 00:40:41.135 ] 00:40:41.135 }' 00:40:41.135 19:33:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:41.135 19:33:56 -- common/autotest_common.sh@10 -- # set +x 00:40:41.701 19:33:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:41.701 19:33:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:40:41.958 [2024-04-18 19:33:57.812492] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:41.958 19:33:57 -- bdev/bdev_raid.sh@430 -- # '[' 7e526e17-9fde-4043-92c0-ea651e3e686a '!=' 7e526e17-9fde-4043-92c0-ea651e3e686a ']' 00:40:41.958 19:33:57 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:40:41.958 19:33:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:40:41.958 19:33:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:40:41.958 19:33:57 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:40:42.217 [2024-04-18 19:33:58.088398] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:42.217 19:33:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:42.476 19:33:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:42.476 "name": "raid_bdev1", 00:40:42.476 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:42.476 "strip_size_kb": 64, 
00:40:42.476 "state": "online", 00:40:42.476 "raid_level": "raid5f", 00:40:42.476 "superblock": true, 00:40:42.476 "num_base_bdevs": 3, 00:40:42.476 "num_base_bdevs_discovered": 2, 00:40:42.476 "num_base_bdevs_operational": 2, 00:40:42.476 "base_bdevs_list": [ 00:40:42.476 { 00:40:42.476 "name": null, 00:40:42.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:42.476 "is_configured": false, 00:40:42.476 "data_offset": 2048, 00:40:42.476 "data_size": 63488 00:40:42.476 }, 00:40:42.476 { 00:40:42.476 "name": "pt2", 00:40:42.476 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:42.476 "is_configured": true, 00:40:42.476 "data_offset": 2048, 00:40:42.476 "data_size": 63488 00:40:42.476 }, 00:40:42.476 { 00:40:42.476 "name": "pt3", 00:40:42.476 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:42.476 "is_configured": true, 00:40:42.476 "data_offset": 2048, 00:40:42.476 "data_size": 63488 00:40:42.476 } 00:40:42.476 ] 00:40:42.476 }' 00:40:42.476 19:33:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:42.476 19:33:58 -- common/autotest_common.sh@10 -- # set +x 00:40:43.410 19:33:59 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:43.410 [2024-04-18 19:33:59.324649] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:43.410 [2024-04-18 19:33:59.324698] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:43.410 [2024-04-18 19:33:59.324771] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:43.410 [2024-04-18 19:33:59.324837] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:43.410 [2024-04-18 19:33:59.324850] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:40:43.668 19:33:59 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:40:43.668 19:33:59 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:43.926 19:33:59 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:40:43.926 19:33:59 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:40:43.926 19:33:59 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:40:43.926 19:33:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:40:43.926 19:33:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:44.184 19:33:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:40:44.184 19:33:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:40:44.184 19:33:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:40:44.184 19:34:00 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:40:44.184 19:34:00 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:40:44.184 19:34:00 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:40:44.184 19:34:00 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:40:44.184 19:34:00 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:44.444 [2024-04-18 19:34:00.296838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:44.444 [2024-04-18 19:34:00.296933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:40:44.444 [2024-04-18 19:34:00.296971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:40:44.444 [2024-04-18 19:34:00.296997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:44.444 [2024-04-18 19:34:00.299574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:44.444 [2024-04-18 19:34:00.299628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:44.444 [2024-04-18 19:34:00.299753] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:40:44.444 [2024-04-18 19:34:00.299816] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:44.444 pt2 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:44.444 19:34:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:44.703 19:34:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:44.703 "name": "raid_bdev1", 00:40:44.703 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:44.703 "strip_size_kb": 64, 00:40:44.703 "state": "configuring", 00:40:44.703 "raid_level": "raid5f", 00:40:44.703 "superblock": true, 00:40:44.703 "num_base_bdevs": 3, 00:40:44.703 "num_base_bdevs_discovered": 1, 00:40:44.703 "num_base_bdevs_operational": 2, 00:40:44.703 "base_bdevs_list": [ 00:40:44.703 { 00:40:44.703 "name": null, 00:40:44.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:44.703 "is_configured": false, 00:40:44.703 "data_offset": 2048, 00:40:44.703 "data_size": 63488 00:40:44.703 }, 00:40:44.703 { 00:40:44.703 "name": "pt2", 00:40:44.703 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:44.703 "is_configured": true, 00:40:44.703 "data_offset": 2048, 00:40:44.703 "data_size": 63488 00:40:44.703 }, 00:40:44.703 { 00:40:44.703 "name": null, 00:40:44.703 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:44.703 "is_configured": false, 00:40:44.703 "data_offset": 2048, 00:40:44.703 "data_size": 63488 00:40:44.703 } 00:40:44.703 ] 00:40:44.703 }' 00:40:44.703 19:34:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:44.703 19:34:00 -- common/autotest_common.sh@10 -- # set +x 00:40:45.637 19:34:01 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:40:45.637 19:34:01 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:40:45.637 19:34:01 -- bdev/bdev_raid.sh@462 -- # i=2 00:40:45.637 19:34:01 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:45.895 [2024-04-18 19:34:01.685608] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:45.895 [2024-04-18 19:34:01.685706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:45.895 [2024-04-18 19:34:01.685758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:40:45.895 [2024-04-18 19:34:01.685784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:45.895 [2024-04-18 19:34:01.686296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:45.895 [2024-04-18 19:34:01.686335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:45.895 [2024-04-18 19:34:01.686475] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:40:45.895 [2024-04-18 19:34:01.686503] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:45.895 [2024-04-18 19:34:01.686612] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:40:45.895 [2024-04-18 19:34:01.686630] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:45.895 [2024-04-18 19:34:01.686717] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:45.895 [2024-04-18 19:34:01.692408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:40:45.895 [2024-04-18 19:34:01.692438] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:40:45.895 [2024-04-18 19:34:01.692752] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:45.895 pt3 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:45.895 19:34:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:46.153 19:34:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:46.153 "name": "raid_bdev1", 00:40:46.153 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:46.153 "strip_size_kb": 64, 00:40:46.153 "state": "online", 00:40:46.153 "raid_level": "raid5f", 00:40:46.153 "superblock": true, 00:40:46.153 "num_base_bdevs": 3, 00:40:46.153 "num_base_bdevs_discovered": 2, 00:40:46.153 "num_base_bdevs_operational": 2, 00:40:46.153 "base_bdevs_list": [ 00:40:46.153 { 00:40:46.153 "name": null, 00:40:46.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:46.153 "is_configured": false, 00:40:46.153 "data_offset": 2048, 00:40:46.153 "data_size": 63488 00:40:46.153 }, 00:40:46.153 { 00:40:46.153 "name": "pt2", 00:40:46.153 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 
00:40:46.153 "is_configured": true, 00:40:46.153 "data_offset": 2048, 00:40:46.153 "data_size": 63488 00:40:46.154 }, 00:40:46.154 { 00:40:46.154 "name": "pt3", 00:40:46.154 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:46.154 "is_configured": true, 00:40:46.154 "data_offset": 2048, 00:40:46.154 "data_size": 63488 00:40:46.154 } 00:40:46.154 ] 00:40:46.154 }' 00:40:46.154 19:34:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:46.154 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:40:47.101 19:34:02 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:40:47.101 19:34:02 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:47.359 [2024-04-18 19:34:03.029597] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:47.359 [2024-04-18 19:34:03.029670] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:47.359 [2024-04-18 19:34:03.029789] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:47.359 [2024-04-18 19:34:03.029883] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:47.359 [2024-04-18 19:34:03.029900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:40:47.359 19:34:03 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:47.359 19:34:03 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:40:47.617 19:34:03 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:40:47.617 19:34:03 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:40:47.617 19:34:03 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:47.617 [2024-04-18 19:34:03.541708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:47.617 [2024-04-18 19:34:03.541853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:47.617 [2024-04-18 19:34:03.541903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:40:47.617 [2024-04-18 19:34:03.541929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:47.875 [2024-04-18 19:34:03.545225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:47.875 [2024-04-18 19:34:03.545324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:47.875 [2024-04-18 19:34:03.545517] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:40:47.875 [2024-04-18 19:34:03.545608] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:47.875 pt1 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:47.875 19:34:03 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:47.875 19:34:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:48.133 19:34:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:48.133 "name": "raid_bdev1", 00:40:48.133 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:48.133 "strip_size_kb": 64, 00:40:48.133 "state": "configuring", 00:40:48.133 "raid_level": "raid5f", 00:40:48.133 "superblock": true, 00:40:48.133 "num_base_bdevs": 3, 00:40:48.133 "num_base_bdevs_discovered": 1, 00:40:48.133 "num_base_bdevs_operational": 3, 00:40:48.133 "base_bdevs_list": [ 00:40:48.133 { 00:40:48.133 "name": "pt1", 00:40:48.133 "uuid": "142a88d2-f717-584b-a74a-c004cdeb10b3", 00:40:48.133 "is_configured": true, 00:40:48.133 "data_offset": 2048, 00:40:48.133 "data_size": 63488 00:40:48.133 }, 00:40:48.133 { 00:40:48.133 "name": null, 00:40:48.133 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:48.133 "is_configured": false, 00:40:48.133 "data_offset": 2048, 00:40:48.133 "data_size": 63488 00:40:48.133 }, 00:40:48.133 { 00:40:48.133 "name": null, 00:40:48.133 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:48.133 "is_configured": false, 00:40:48.133 "data_offset": 2048, 00:40:48.133 "data_size": 63488 00:40:48.133 } 00:40:48.133 ] 00:40:48.133 }' 00:40:48.133 19:34:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:48.133 19:34:03 -- common/autotest_common.sh@10 -- # set +x 00:40:48.700 19:34:04 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:40:48.700 19:34:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:40:48.700 19:34:04 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:48.958 19:34:04 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:40:48.958 19:34:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:40:48.958 19:34:04 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:40:49.216 19:34:04 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:40:49.216 19:34:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:40:49.216 19:34:04 -- bdev/bdev_raid.sh@489 -- # i=2 00:40:49.216 19:34:04 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:49.216 [2024-04-18 19:34:05.106297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:49.216 [2024-04-18 19:34:05.106434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:49.216 [2024-04-18 19:34:05.106472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:40:49.216 [2024-04-18 19:34:05.106511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:49.216 [2024-04-18 19:34:05.107038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:49.216 [2024-04-18 19:34:05.107084] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:49.216 [2024-04-18 19:34:05.107222] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:40:49.216 [2024-04-18 19:34:05.107235] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:40:49.216 [2024-04-18 19:34:05.107244] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:49.216 [2024-04-18 19:34:05.107264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:40:49.216 [2024-04-18 19:34:05.107352] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:49.216 pt3 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:49.216 19:34:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:49.781 19:34:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:49.781 "name": "raid_bdev1", 00:40:49.781 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:49.781 "strip_size_kb": 64, 00:40:49.781 "state": "configuring", 00:40:49.781 "raid_level": "raid5f", 00:40:49.781 "superblock": true, 00:40:49.781 "num_base_bdevs": 3, 00:40:49.781 "num_base_bdevs_discovered": 1, 00:40:49.781 "num_base_bdevs_operational": 2, 00:40:49.781 "base_bdevs_list": [ 00:40:49.781 { 00:40:49.781 "name": null, 00:40:49.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.781 "is_configured": false, 00:40:49.781 "data_offset": 2048, 00:40:49.781 "data_size": 63488 00:40:49.781 }, 00:40:49.781 { 00:40:49.781 "name": null, 00:40:49.781 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:49.781 "is_configured": false, 00:40:49.781 "data_offset": 2048, 00:40:49.781 "data_size": 63488 00:40:49.781 }, 00:40:49.781 { 00:40:49.781 "name": "pt3", 00:40:49.781 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:49.781 "is_configured": true, 00:40:49.781 "data_offset": 2048, 00:40:49.781 "data_size": 63488 00:40:49.781 } 00:40:49.781 ] 00:40:49.781 }' 00:40:49.781 19:34:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:49.781 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:40:50.347 19:34:06 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:40:50.347 19:34:06 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:40:50.347 19:34:06 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:50.605 [2024-04-18 19:34:06.342602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:50.605 [2024-04-18 19:34:06.342719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:50.605 [2024-04-18 
19:34:06.342774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:40:50.605 [2024-04-18 19:34:06.342814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:50.605 [2024-04-18 19:34:06.343561] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:50.605 [2024-04-18 19:34:06.343624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:50.605 [2024-04-18 19:34:06.343800] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:40:50.605 [2024-04-18 19:34:06.343863] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:50.605 [2024-04-18 19:34:06.343995] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:40:50.605 [2024-04-18 19:34:06.344013] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:50.605 [2024-04-18 19:34:06.344118] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:40:50.605 [2024-04-18 19:34:06.349885] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:40:50.605 [2024-04-18 19:34:06.349920] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:40:50.605 [2024-04-18 19:34:06.350226] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:50.605 pt2 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:50.605 19:34:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.863 19:34:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:50.863 "name": "raid_bdev1", 00:40:50.863 "uuid": "7e526e17-9fde-4043-92c0-ea651e3e686a", 00:40:50.863 "strip_size_kb": 64, 00:40:50.863 "state": "online", 00:40:50.863 "raid_level": "raid5f", 00:40:50.863 "superblock": true, 00:40:50.863 "num_base_bdevs": 3, 00:40:50.863 "num_base_bdevs_discovered": 2, 00:40:50.863 "num_base_bdevs_operational": 2, 00:40:50.863 "base_bdevs_list": [ 00:40:50.863 { 00:40:50.863 "name": null, 00:40:50.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:50.863 "is_configured": false, 00:40:50.863 "data_offset": 2048, 00:40:50.863 "data_size": 63488 00:40:50.863 }, 00:40:50.863 { 00:40:50.863 "name": "pt2", 00:40:50.863 "uuid": "fa4d8f9c-f0c0-5f80-b64a-30f3a6788c65", 00:40:50.863 "is_configured": true, 00:40:50.863 "data_offset": 2048, 
00:40:50.863 "data_size": 63488 00:40:50.863 }, 00:40:50.863 { 00:40:50.863 "name": "pt3", 00:40:50.863 "uuid": "9950d446-597a-570a-8a4c-7c4800290abd", 00:40:50.863 "is_configured": true, 00:40:50.863 "data_offset": 2048, 00:40:50.863 "data_size": 63488 00:40:50.863 } 00:40:50.863 ] 00:40:50.863 }' 00:40:50.863 19:34:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:50.863 19:34:06 -- common/autotest_common.sh@10 -- # set +x 00:40:51.861 19:34:07 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:51.861 19:34:07 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:40:51.861 [2024-04-18 19:34:07.687217] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:51.861 19:34:07 -- bdev/bdev_raid.sh@506 -- # '[' 7e526e17-9fde-4043-92c0-ea651e3e686a '!=' 7e526e17-9fde-4043-92c0-ea651e3e686a ']' 00:40:51.861 19:34:07 -- bdev/bdev_raid.sh@511 -- # killprocess 138513 00:40:51.861 19:34:07 -- common/autotest_common.sh@936 -- # '[' -z 138513 ']' 00:40:51.861 19:34:07 -- common/autotest_common.sh@940 -- # kill -0 138513 00:40:51.861 19:34:07 -- common/autotest_common.sh@941 -- # uname 00:40:51.861 19:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:40:51.861 19:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138513 00:40:51.861 killing process with pid 138513 00:40:51.862 19:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:40:51.862 19:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:40:51.862 19:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138513' 00:40:51.862 19:34:07 -- common/autotest_common.sh@955 -- # kill 138513 00:40:51.862 19:34:07 -- common/autotest_common.sh@960 -- # wait 138513 00:40:51.862 [2024-04-18 19:34:07.721917] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:51.862 [2024-04-18 19:34:07.721996] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:51.862 [2024-04-18 19:34:07.722058] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:51.862 [2024-04-18 19:34:07.722068] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:40:52.428 [2024-04-18 19:34:08.045191] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:53.800 ************************************ 00:40:53.800 END TEST raid5f_superblock_test 00:40:53.800 ************************************ 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:40:53.800 00:40:53.800 real 0m22.670s 00:40:53.800 user 0m41.393s 00:40:53.800 sys 0m2.647s 00:40:53.800 19:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:53.800 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:40:53.800 19:34:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:40:53.800 19:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:53.800 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:40:53.800 ************************************ 00:40:53.800 START TEST raid5f_rebuild_test 00:40:53.800 ************************************ 00:40:53.800 19:34:09 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 
false false 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=139185 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139185 /var/tmp/spdk-raid.sock 00:40:53.800 19:34:09 -- common/autotest_common.sh@817 -- # '[' -z 139185 ']' 00:40:53.800 19:34:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:53.800 19:34:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:53.800 19:34:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:53.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:40:53.800 19:34:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:53.800 19:34:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:53.800 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:40:53.800 [2024-04-18 19:34:09.636151] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:40:53.800 [2024-04-18 19:34:09.636583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139185 ] 00:40:53.800 I/O size of 3145728 is greater than zero copy threshold (65536). 
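Reduced to its essentials, the raid5f_rebuild_test prologue traced above amounts to the shell sequence sketched below. This is a condensed reconstruction from the traced bdev_raid.sh lines of this run, not the script itself; the real test adds option parsing, cleanup traps, and error handling, and the pid capture via $! stands in for the harness plumbing.

    # Sketch of the traced setup; paths and parameters are as recorded in this run
    num_base_bdevs=3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))  # BaseBdev1 BaseBdev2 BaseBdev3
    create_arg+=' -z 64'   # raid5f is created with a 64 KiB strip size in this test

    # Start bdevperf as the JSON-RPC target and wait for its UNIX-domain socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # Each base bdev is then created over the same socket with one rpc.py call
    for bdev in "${base_bdevs[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_malloc_create 32 512 -b "$bdev"
    done

The bdevperf arguments shown are the ones recorded in this log: a mixed random read/write workload (-w randrw -M 50), 3 MiB I/O size (-o 3M), queue depth 2, a 60-second run, and bdev_raid debug logging enabled via -L.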
00:40:53.800 Zero copy mechanism will not be used. 00:40:54.058 [2024-04-18 19:34:09.807091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.316 [2024-04-18 19:34:10.075319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.574 [2024-04-18 19:34:10.342093] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:54.832 19:34:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:54.832 19:34:10 -- common/autotest_common.sh@850 -- # return 0 00:40:54.832 19:34:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:54.832 19:34:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:40:54.832 19:34:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:40:55.090 BaseBdev1 00:40:55.090 19:34:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:55.090 19:34:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:40:55.090 19:34:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:40:55.348 BaseBdev2 00:40:55.348 19:34:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:55.348 19:34:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:40:55.348 19:34:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:40:55.605 BaseBdev3 00:40:55.605 19:34:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:40:55.862 spare_malloc 00:40:56.119 19:34:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:56.119 spare_delay 00:40:56.119 19:34:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:56.470 [2024-04-18 19:34:12.254870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:56.470 [2024-04-18 19:34:12.254986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:56.470 [2024-04-18 19:34:12.255021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:40:56.470 [2024-04-18 19:34:12.255072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:56.471 [2024-04-18 19:34:12.257625] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:56.471 [2024-04-18 19:34:12.257681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:56.471 spare 00:40:56.471 19:34:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:40:56.729 [2024-04-18 19:34:12.538980] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:56.729 [2024-04-18 19:34:12.541124] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:56.729 [2024-04-18 19:34:12.541180] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:56.729 [2024-04-18 19:34:12.541252] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x616000008a80 00:40:56.729 [2024-04-18 19:34:12.541262] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:40:56.729 [2024-04-18 19:34:12.541426] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:40:56.729 [2024-04-18 19:34:12.549120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:40:56.729 [2024-04-18 19:34:12.549149] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:40:56.729 [2024-04-18 19:34:12.549377] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:56.729 19:34:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:56.988 19:34:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:56.988 "name": "raid_bdev1", 00:40:56.988 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:40:56.988 "strip_size_kb": 64, 00:40:56.988 "state": "online", 00:40:56.988 "raid_level": "raid5f", 00:40:56.988 "superblock": false, 00:40:56.988 "num_base_bdevs": 3, 00:40:56.988 "num_base_bdevs_discovered": 3, 00:40:56.988 "num_base_bdevs_operational": 3, 00:40:56.988 "base_bdevs_list": [ 00:40:56.988 { 00:40:56.988 "name": "BaseBdev1", 00:40:56.988 "uuid": "c62b6372-5ba1-45d8-ade9-bf1225811197", 00:40:56.988 "is_configured": true, 00:40:56.988 "data_offset": 0, 00:40:56.988 "data_size": 65536 00:40:56.989 }, 00:40:56.989 { 00:40:56.989 "name": "BaseBdev2", 00:40:56.989 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:40:56.989 "is_configured": true, 00:40:56.989 "data_offset": 0, 00:40:56.989 "data_size": 65536 00:40:56.989 }, 00:40:56.989 { 00:40:56.989 "name": "BaseBdev3", 00:40:56.989 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:40:56.989 "is_configured": true, 00:40:56.989 "data_offset": 0, 00:40:56.989 "data_size": 65536 00:40:56.989 } 00:40:56.989 ] 00:40:56.989 }' 00:40:56.989 19:34:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:56.989 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:40:57.923 19:34:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:57.923 19:34:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:40:57.923 [2024-04-18 19:34:13.737355] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:57.923 19:34:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:40:57.923 19:34:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:57.923 19:34:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:58.181 19:34:14 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:40:58.181 19:34:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:40:58.181 19:34:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:40:58.181 19:34:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@12 -- # local i 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:58.181 19:34:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:58.439 [2024-04-18 19:34:14.337357] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:40:58.698 /dev/nbd0 00:40:58.698 19:34:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:58.698 19:34:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:58.698 19:34:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:40:58.698 19:34:14 -- common/autotest_common.sh@855 -- # local i 00:40:58.698 19:34:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:40:58.698 19:34:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:40:58.698 19:34:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:40:58.698 19:34:14 -- common/autotest_common.sh@859 -- # break 00:40:58.698 19:34:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:40:58.698 19:34:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:40:58.698 19:34:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:58.698 1+0 records in 00:40:58.698 1+0 records out 00:40:58.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00235014 s, 1.7 MB/s 00:40:58.698 19:34:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:58.698 19:34:14 -- common/autotest_common.sh@872 -- # size=4096 00:40:58.698 19:34:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:58.698 19:34:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:40:58.698 19:34:14 -- common/autotest_common.sh@875 -- # return 0 00:40:58.698 19:34:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:58.698 19:34:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:58.698 19:34:14 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:40:58.698 19:34:14 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:40:58.698 19:34:14 -- bdev/bdev_raid.sh@582 -- # echo 128 00:40:58.698 19:34:14 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:40:58.973 512+0 records in 00:40:58.973 512+0 records out 00:40:58.973 67108864 bytes (67 MB, 64 MiB) copied, 0.440363 s, 152 MB/s 00:40:58.973 19:34:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:40:58.973 19:34:14 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:40:58.973 19:34:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:40:58.973 19:34:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:58.973 19:34:14 -- bdev/nbd_common.sh@51 -- # local i 00:40:58.973 19:34:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:58.973 19:34:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:59.231 19:34:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:40:59.491 [2024-04-18 19:34:15.162923] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@41 -- # break 00:40:59.491 19:34:15 -- bdev/nbd_common.sh@45 -- # return 0 00:40:59.491 19:34:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:40:59.871 [2024-04-18 19:34:15.470630] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:59.871 "name": "raid_bdev1", 00:40:59.871 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:40:59.871 "strip_size_kb": 64, 00:40:59.871 "state": "online", 00:40:59.871 "raid_level": "raid5f", 00:40:59.871 "superblock": false, 00:40:59.871 "num_base_bdevs": 3, 00:40:59.871 "num_base_bdevs_discovered": 2, 00:40:59.871 "num_base_bdevs_operational": 2, 00:40:59.871 "base_bdevs_list": [ 00:40:59.871 { 00:40:59.871 "name": null, 00:40:59.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:59.871 "is_configured": false, 00:40:59.871 "data_offset": 0, 00:40:59.871 "data_size": 65536 00:40:59.871 }, 00:40:59.871 { 00:40:59.871 "name": "BaseBdev2", 00:40:59.871 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:40:59.871 "is_configured": true, 00:40:59.871 "data_offset": 0, 
00:40:59.871 "data_size": 65536 00:40:59.871 }, 00:40:59.871 { 00:40:59.871 "name": "BaseBdev3", 00:40:59.871 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:40:59.871 "is_configured": true, 00:40:59.871 "data_offset": 0, 00:40:59.871 "data_size": 65536 00:40:59.871 } 00:40:59.871 ] 00:40:59.871 }' 00:40:59.871 19:34:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:59.871 19:34:15 -- common/autotest_common.sh@10 -- # set +x 00:41:00.812 19:34:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:00.812 [2024-04-18 19:34:16.650821] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:00.812 [2024-04-18 19:34:16.650872] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:00.812 [2024-04-18 19:34:16.669333] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:41:00.812 [2024-04-18 19:34:16.677687] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:00.812 19:34:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:02.190 "name": "raid_bdev1", 00:41:02.190 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:02.190 "strip_size_kb": 64, 00:41:02.190 "state": "online", 00:41:02.190 "raid_level": "raid5f", 00:41:02.190 "superblock": false, 00:41:02.190 "num_base_bdevs": 3, 00:41:02.190 "num_base_bdevs_discovered": 3, 00:41:02.190 "num_base_bdevs_operational": 3, 00:41:02.190 "process": { 00:41:02.190 "type": "rebuild", 00:41:02.190 "target": "spare", 00:41:02.190 "progress": { 00:41:02.190 "blocks": 24576, 00:41:02.190 "percent": 18 00:41:02.190 } 00:41:02.190 }, 00:41:02.190 "base_bdevs_list": [ 00:41:02.190 { 00:41:02.190 "name": "spare", 00:41:02.190 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:02.190 "is_configured": true, 00:41:02.190 "data_offset": 0, 00:41:02.190 "data_size": 65536 00:41:02.190 }, 00:41:02.190 { 00:41:02.190 "name": "BaseBdev2", 00:41:02.190 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:02.190 "is_configured": true, 00:41:02.190 "data_offset": 0, 00:41:02.190 "data_size": 65536 00:41:02.190 }, 00:41:02.190 { 00:41:02.190 "name": "BaseBdev3", 00:41:02.190 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:02.190 "is_configured": true, 00:41:02.190 "data_offset": 0, 00:41:02.190 "data_size": 65536 00:41:02.190 } 00:41:02.190 ] 00:41:02.190 }' 00:41:02.190 19:34:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:02.190 19:34:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:02.190 19:34:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:02.190 19:34:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:02.190 19:34:18 -- 
bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:02.449 [2024-04-18 19:34:18.339669] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:02.708 [2024-04-18 19:34:18.394370] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:02.708 [2024-04-18 19:34:18.394487] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:02.708 19:34:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:02.966 19:34:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:02.966 "name": "raid_bdev1", 00:41:02.966 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:02.966 "strip_size_kb": 64, 00:41:02.966 "state": "online", 00:41:02.966 "raid_level": "raid5f", 00:41:02.966 "superblock": false, 00:41:02.966 "num_base_bdevs": 3, 00:41:02.966 "num_base_bdevs_discovered": 2, 00:41:02.966 "num_base_bdevs_operational": 2, 00:41:02.966 "base_bdevs_list": [ 00:41:02.966 { 00:41:02.966 "name": null, 00:41:02.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:02.966 "is_configured": false, 00:41:02.966 "data_offset": 0, 00:41:02.966 "data_size": 65536 00:41:02.966 }, 00:41:02.966 { 00:41:02.966 "name": "BaseBdev2", 00:41:02.966 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:02.966 "is_configured": true, 00:41:02.966 "data_offset": 0, 00:41:02.966 "data_size": 65536 00:41:02.966 }, 00:41:02.966 { 00:41:02.966 "name": "BaseBdev3", 00:41:02.966 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:02.966 "is_configured": true, 00:41:02.966 "data_offset": 0, 00:41:02.966 "data_size": 65536 00:41:02.966 } 00:41:02.966 ] 00:41:02.966 }' 00:41:02.966 19:34:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:02.966 19:34:18 -- common/autotest_common.sh@10 -- # set +x 00:41:03.533 19:34:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:03.533 19:34:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:03.534 19:34:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:03.534 19:34:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:03.534 19:34:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:03.534 19:34:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:03.534 19:34:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:03.793 19:34:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 
00:41:03.793 "name": "raid_bdev1", 00:41:03.793 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:03.793 "strip_size_kb": 64, 00:41:03.793 "state": "online", 00:41:03.793 "raid_level": "raid5f", 00:41:03.793 "superblock": false, 00:41:03.793 "num_base_bdevs": 3, 00:41:03.793 "num_base_bdevs_discovered": 2, 00:41:03.793 "num_base_bdevs_operational": 2, 00:41:03.793 "base_bdevs_list": [ 00:41:03.793 { 00:41:03.793 "name": null, 00:41:03.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:03.793 "is_configured": false, 00:41:03.793 "data_offset": 0, 00:41:03.793 "data_size": 65536 00:41:03.793 }, 00:41:03.793 { 00:41:03.793 "name": "BaseBdev2", 00:41:03.793 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:03.793 "is_configured": true, 00:41:03.793 "data_offset": 0, 00:41:03.793 "data_size": 65536 00:41:03.793 }, 00:41:03.793 { 00:41:03.793 "name": "BaseBdev3", 00:41:03.793 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:03.793 "is_configured": true, 00:41:03.793 "data_offset": 0, 00:41:03.793 "data_size": 65536 00:41:03.793 } 00:41:03.793 ] 00:41:03.793 }' 00:41:03.793 19:34:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:03.793 19:34:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:03.793 19:34:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:03.793 19:34:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:03.793 19:34:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:04.053 [2024-04-18 19:34:19.931436] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:04.053 [2024-04-18 19:34:19.931494] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:04.053 [2024-04-18 19:34:19.949289] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:41:04.053 [2024-04-18 19:34:19.958061] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:04.053 19:34:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:41:05.431 19:34:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:05.432 19:34:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:05.432 19:34:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:05.432 19:34:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:05.432 19:34:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:05.432 19:34:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:05.432 19:34:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.432 19:34:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:05.432 "name": "raid_bdev1", 00:41:05.432 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:05.432 "strip_size_kb": 64, 00:41:05.432 "state": "online", 00:41:05.432 "raid_level": "raid5f", 00:41:05.432 "superblock": false, 00:41:05.432 "num_base_bdevs": 3, 00:41:05.432 "num_base_bdevs_discovered": 3, 00:41:05.432 "num_base_bdevs_operational": 3, 00:41:05.432 "process": { 00:41:05.432 "type": "rebuild", 00:41:05.432 "target": "spare", 00:41:05.432 "progress": { 00:41:05.432 "blocks": 24576, 00:41:05.432 "percent": 18 00:41:05.432 } 00:41:05.432 }, 00:41:05.432 "base_bdevs_list": [ 00:41:05.432 { 00:41:05.432 "name": "spare", 00:41:05.432 "uuid": 
"dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:05.432 "is_configured": true, 00:41:05.432 "data_offset": 0, 00:41:05.432 "data_size": 65536 00:41:05.432 }, 00:41:05.432 { 00:41:05.432 "name": "BaseBdev2", 00:41:05.432 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:05.432 "is_configured": true, 00:41:05.432 "data_offset": 0, 00:41:05.432 "data_size": 65536 00:41:05.432 }, 00:41:05.432 { 00:41:05.432 "name": "BaseBdev3", 00:41:05.432 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:05.432 "is_configured": true, 00:41:05.432 "data_offset": 0, 00:41:05.432 "data_size": 65536 00:41:05.432 } 00:41:05.432 ] 00:41:05.432 }' 00:41:05.432 19:34:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:05.432 19:34:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:05.432 19:34:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@657 -- # local timeout=706 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:05.691 19:34:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.950 19:34:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:05.950 "name": "raid_bdev1", 00:41:05.950 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:05.950 "strip_size_kb": 64, 00:41:05.950 "state": "online", 00:41:05.950 "raid_level": "raid5f", 00:41:05.950 "superblock": false, 00:41:05.950 "num_base_bdevs": 3, 00:41:05.950 "num_base_bdevs_discovered": 3, 00:41:05.950 "num_base_bdevs_operational": 3, 00:41:05.950 "process": { 00:41:05.950 "type": "rebuild", 00:41:05.950 "target": "spare", 00:41:05.950 "progress": { 00:41:05.950 "blocks": 34816, 00:41:05.950 "percent": 26 00:41:05.950 } 00:41:05.950 }, 00:41:05.950 "base_bdevs_list": [ 00:41:05.950 { 00:41:05.950 "name": "spare", 00:41:05.950 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:05.950 "is_configured": true, 00:41:05.950 "data_offset": 0, 00:41:05.950 "data_size": 65536 00:41:05.950 }, 00:41:05.950 { 00:41:05.950 "name": "BaseBdev2", 00:41:05.950 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:05.950 "is_configured": true, 00:41:05.950 "data_offset": 0, 00:41:05.950 "data_size": 65536 00:41:05.950 }, 00:41:05.950 { 00:41:05.950 "name": "BaseBdev3", 00:41:05.950 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:05.950 "is_configured": true, 00:41:05.950 "data_offset": 0, 00:41:05.950 "data_size": 65536 00:41:05.950 } 00:41:05.950 ] 00:41:05.950 }' 00:41:05.950 19:34:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:05.950 19:34:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:05.950 
19:34:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:05.950 19:34:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:05.950 19:34:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:07.333 19:34:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:07.333 19:34:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:07.333 "name": "raid_bdev1", 00:41:07.333 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:07.333 "strip_size_kb": 64, 00:41:07.333 "state": "online", 00:41:07.333 "raid_level": "raid5f", 00:41:07.333 "superblock": false, 00:41:07.333 "num_base_bdevs": 3, 00:41:07.333 "num_base_bdevs_discovered": 3, 00:41:07.333 "num_base_bdevs_operational": 3, 00:41:07.333 "process": { 00:41:07.333 "type": "rebuild", 00:41:07.333 "target": "spare", 00:41:07.333 "progress": { 00:41:07.333 "blocks": 61440, 00:41:07.333 "percent": 46 00:41:07.333 } 00:41:07.333 }, 00:41:07.333 "base_bdevs_list": [ 00:41:07.333 { 00:41:07.333 "name": "spare", 00:41:07.333 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:07.333 "is_configured": true, 00:41:07.333 "data_offset": 0, 00:41:07.333 "data_size": 65536 00:41:07.333 }, 00:41:07.333 { 00:41:07.333 "name": "BaseBdev2", 00:41:07.333 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:07.333 "is_configured": true, 00:41:07.333 "data_offset": 0, 00:41:07.333 "data_size": 65536 00:41:07.333 }, 00:41:07.333 { 00:41:07.333 "name": "BaseBdev3", 00:41:07.333 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:07.333 "is_configured": true, 00:41:07.333 "data_offset": 0, 00:41:07.333 "data_size": 65536 00:41:07.333 } 00:41:07.333 ] 00:41:07.333 }' 00:41:07.333 19:34:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:07.333 19:34:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:07.333 19:34:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:07.333 19:34:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:07.333 19:34:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:08.709 "name": "raid_bdev1", 00:41:08.709 "uuid": 
"ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:08.709 "strip_size_kb": 64, 00:41:08.709 "state": "online", 00:41:08.709 "raid_level": "raid5f", 00:41:08.709 "superblock": false, 00:41:08.709 "num_base_bdevs": 3, 00:41:08.709 "num_base_bdevs_discovered": 3, 00:41:08.709 "num_base_bdevs_operational": 3, 00:41:08.709 "process": { 00:41:08.709 "type": "rebuild", 00:41:08.709 "target": "spare", 00:41:08.709 "progress": { 00:41:08.709 "blocks": 90112, 00:41:08.709 "percent": 68 00:41:08.709 } 00:41:08.709 }, 00:41:08.709 "base_bdevs_list": [ 00:41:08.709 { 00:41:08.709 "name": "spare", 00:41:08.709 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:08.709 "is_configured": true, 00:41:08.709 "data_offset": 0, 00:41:08.709 "data_size": 65536 00:41:08.709 }, 00:41:08.709 { 00:41:08.709 "name": "BaseBdev2", 00:41:08.709 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:08.709 "is_configured": true, 00:41:08.709 "data_offset": 0, 00:41:08.709 "data_size": 65536 00:41:08.709 }, 00:41:08.709 { 00:41:08.709 "name": "BaseBdev3", 00:41:08.709 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:08.709 "is_configured": true, 00:41:08.709 "data_offset": 0, 00:41:08.709 "data_size": 65536 00:41:08.709 } 00:41:08.709 ] 00:41:08.709 }' 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:08.709 19:34:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:10.083 "name": "raid_bdev1", 00:41:10.083 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:10.083 "strip_size_kb": 64, 00:41:10.083 "state": "online", 00:41:10.083 "raid_level": "raid5f", 00:41:10.083 "superblock": false, 00:41:10.083 "num_base_bdevs": 3, 00:41:10.083 "num_base_bdevs_discovered": 3, 00:41:10.083 "num_base_bdevs_operational": 3, 00:41:10.083 "process": { 00:41:10.083 "type": "rebuild", 00:41:10.083 "target": "spare", 00:41:10.083 "progress": { 00:41:10.083 "blocks": 118784, 00:41:10.083 "percent": 90 00:41:10.083 } 00:41:10.083 }, 00:41:10.083 "base_bdevs_list": [ 00:41:10.083 { 00:41:10.083 "name": "spare", 00:41:10.083 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:10.083 "is_configured": true, 00:41:10.083 "data_offset": 0, 00:41:10.083 "data_size": 65536 00:41:10.083 }, 00:41:10.083 { 00:41:10.083 "name": "BaseBdev2", 00:41:10.083 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:10.083 "is_configured": true, 00:41:10.083 "data_offset": 0, 00:41:10.083 "data_size": 65536 00:41:10.083 }, 00:41:10.083 { 00:41:10.083 "name": "BaseBdev3", 00:41:10.083 "uuid": 
"3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:10.083 "is_configured": true, 00:41:10.083 "data_offset": 0, 00:41:10.083 "data_size": 65536 00:41:10.083 } 00:41:10.083 ] 00:41:10.083 }' 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:10.083 19:34:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:10.656 [2024-04-18 19:34:26.418862] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:10.656 [2024-04-18 19:34:26.418965] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:10.656 [2024-04-18 19:34:26.419058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:11.223 19:34:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:11.223 19:34:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:11.481 "name": "raid_bdev1", 00:41:11.481 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:11.481 "strip_size_kb": 64, 00:41:11.481 "state": "online", 00:41:11.481 "raid_level": "raid5f", 00:41:11.481 "superblock": false, 00:41:11.481 "num_base_bdevs": 3, 00:41:11.481 "num_base_bdevs_discovered": 3, 00:41:11.481 "num_base_bdevs_operational": 3, 00:41:11.481 "base_bdevs_list": [ 00:41:11.481 { 00:41:11.481 "name": "spare", 00:41:11.481 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:11.481 "is_configured": true, 00:41:11.481 "data_offset": 0, 00:41:11.481 "data_size": 65536 00:41:11.481 }, 00:41:11.481 { 00:41:11.481 "name": "BaseBdev2", 00:41:11.481 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:11.481 "is_configured": true, 00:41:11.481 "data_offset": 0, 00:41:11.481 "data_size": 65536 00:41:11.481 }, 00:41:11.481 { 00:41:11.481 "name": "BaseBdev3", 00:41:11.481 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:11.481 "is_configured": true, 00:41:11.481 "data_offset": 0, 00:41:11.481 "data_size": 65536 00:41:11.481 } 00:41:11.481 ] 00:41:11.481 }' 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@660 -- # break 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@185 -- # local 
target=none 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:11.481 19:34:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:11.739 19:34:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:11.739 "name": "raid_bdev1", 00:41:11.739 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:11.739 "strip_size_kb": 64, 00:41:11.739 "state": "online", 00:41:11.739 "raid_level": "raid5f", 00:41:11.739 "superblock": false, 00:41:11.739 "num_base_bdevs": 3, 00:41:11.739 "num_base_bdevs_discovered": 3, 00:41:11.739 "num_base_bdevs_operational": 3, 00:41:11.739 "base_bdevs_list": [ 00:41:11.739 { 00:41:11.739 "name": "spare", 00:41:11.739 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:11.739 "is_configured": true, 00:41:11.739 "data_offset": 0, 00:41:11.739 "data_size": 65536 00:41:11.739 }, 00:41:11.739 { 00:41:11.739 "name": "BaseBdev2", 00:41:11.739 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:11.739 "is_configured": true, 00:41:11.739 "data_offset": 0, 00:41:11.739 "data_size": 65536 00:41:11.739 }, 00:41:11.739 { 00:41:11.739 "name": "BaseBdev3", 00:41:11.739 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:11.739 "is_configured": true, 00:41:11.739 "data_offset": 0, 00:41:11.739 "data_size": 65536 00:41:11.739 } 00:41:11.739 ] 00:41:11.739 }' 00:41:11.739 19:34:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:11.739 19:34:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:11.739 19:34:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:11.998 19:34:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:12.257 19:34:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:12.257 "name": "raid_bdev1", 00:41:12.257 "uuid": "ccbd49bc-3fae-4e78-80de-1a532cb7f4b6", 00:41:12.257 "strip_size_kb": 64, 00:41:12.257 "state": "online", 00:41:12.257 "raid_level": "raid5f", 00:41:12.257 "superblock": false, 00:41:12.257 "num_base_bdevs": 3, 00:41:12.257 "num_base_bdevs_discovered": 3, 00:41:12.257 "num_base_bdevs_operational": 3, 00:41:12.257 "base_bdevs_list": [ 00:41:12.257 { 00:41:12.257 "name": "spare", 00:41:12.257 "uuid": "dcee5ac4-0fb0-5198-9b27-a3a5fe508bf4", 00:41:12.257 "is_configured": true, 00:41:12.257 "data_offset": 0, 00:41:12.257 "data_size": 65536 00:41:12.257 }, 00:41:12.257 { 
00:41:12.257 "name": "BaseBdev2", 00:41:12.257 "uuid": "71cbcaa4-bd19-4704-acb2-b4f8f4c98139", 00:41:12.257 "is_configured": true, 00:41:12.257 "data_offset": 0, 00:41:12.257 "data_size": 65536 00:41:12.257 }, 00:41:12.257 { 00:41:12.257 "name": "BaseBdev3", 00:41:12.257 "uuid": "3d428980-0bf2-4220-9cd6-db1c1e29886b", 00:41:12.257 "is_configured": true, 00:41:12.257 "data_offset": 0, 00:41:12.257 "data_size": 65536 00:41:12.257 } 00:41:12.257 ] 00:41:12.257 }' 00:41:12.257 19:34:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:12.257 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:41:12.824 19:34:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:13.082 [2024-04-18 19:34:28.942154] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:13.082 [2024-04-18 19:34:28.942204] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:13.082 [2024-04-18 19:34:28.942303] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:13.082 [2024-04-18 19:34:28.942393] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:13.082 [2024-04-18 19:34:28.942407] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:41:13.082 19:34:28 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:13.082 19:34:28 -- bdev/bdev_raid.sh@671 -- # jq length 00:41:13.340 19:34:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:41:13.340 19:34:29 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:41:13.340 19:34:29 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@12 -- # local i 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:13.340 19:34:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:13.908 /dev/nbd0 00:41:13.908 19:34:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:13.908 19:34:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:13.908 19:34:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:41:13.908 19:34:29 -- common/autotest_common.sh@855 -- # local i 00:41:13.908 19:34:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:13.908 19:34:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:13.908 19:34:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:41:13.908 19:34:29 -- common/autotest_common.sh@859 -- # break 00:41:13.908 19:34:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:13.908 19:34:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:13.908 19:34:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:13.908 1+0 records in 
00:41:13.908 1+0 records out 00:41:13.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375044 s, 10.9 MB/s 00:41:13.908 19:34:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:13.908 19:34:29 -- common/autotest_common.sh@872 -- # size=4096 00:41:13.908 19:34:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:13.908 19:34:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:13.908 19:34:29 -- common/autotest_common.sh@875 -- # return 0 00:41:13.908 19:34:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:13.908 19:34:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:13.908 19:34:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:41:14.166 /dev/nbd1 00:41:14.166 19:34:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:14.166 19:34:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:14.166 19:34:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:41:14.166 19:34:29 -- common/autotest_common.sh@855 -- # local i 00:41:14.166 19:34:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:14.166 19:34:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:14.166 19:34:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:41:14.166 19:34:29 -- common/autotest_common.sh@859 -- # break 00:41:14.166 19:34:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:14.166 19:34:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:14.166 19:34:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:14.166 1+0 records in 00:41:14.166 1+0 records out 00:41:14.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450966 s, 9.1 MB/s 00:41:14.166 19:34:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:14.166 19:34:29 -- common/autotest_common.sh@872 -- # size=4096 00:41:14.166 19:34:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:14.166 19:34:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:14.166 19:34:29 -- common/autotest_common.sh@875 -- # return 0 00:41:14.166 19:34:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:14.166 19:34:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:14.166 19:34:29 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:41:14.425 19:34:30 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:41:14.425 19:34:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:14.425 19:34:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:41:14.425 19:34:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:14.425 19:34:30 -- bdev/nbd_common.sh@51 -- # local i 00:41:14.425 19:34:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:14.425 19:34:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@38 
-- # grep -q -w nbd0 /proc/partitions 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@41 -- # break 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@45 -- # return 0 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:14.683 19:34:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@41 -- # break 00:41:14.941 19:34:30 -- bdev/nbd_common.sh@45 -- # return 0 00:41:14.941 19:34:30 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:41:14.941 19:34:30 -- bdev/bdev_raid.sh@709 -- # killprocess 139185 00:41:14.941 19:34:30 -- common/autotest_common.sh@936 -- # '[' -z 139185 ']' 00:41:14.941 19:34:30 -- common/autotest_common.sh@940 -- # kill -0 139185 00:41:14.941 19:34:30 -- common/autotest_common.sh@941 -- # uname 00:41:14.941 19:34:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:14.941 19:34:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139185 00:41:14.941 19:34:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:14.941 killing process with pid 139185 00:41:14.941 19:34:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:14.941 19:34:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139185' 00:41:14.941 19:34:30 -- common/autotest_common.sh@955 -- # kill 139185 00:41:14.941 19:34:30 -- common/autotest_common.sh@960 -- # wait 139185 00:41:14.941 Received shutdown signal, test time was about 60.000000 seconds 00:41:14.941 00:41:14.941 Latency(us) 00:41:14.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.941 =================================================================================================================== 00:41:14.941 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:14.941 [2024-04-18 19:34:30.723168] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:15.509 [2024-04-18 19:34:31.191905] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:16.886 19:34:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:41:16.886 00:41:16.886 real 0m23.134s 00:41:16.886 user 0m34.664s 00:41:16.886 sys 0m2.927s 00:41:16.886 19:34:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:16.886 19:34:32 -- common/autotest_common.sh@10 -- # set +x 00:41:16.886 ************************************ 00:41:16.886 END TEST raid5f_rebuild_test 00:41:16.886 ************************************ 00:41:16.886 19:34:32 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:41:16.886 19:34:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:41:16.886 19:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:16.886 19:34:32 -- 
common/autotest_common.sh@10 -- # set +x 00:41:16.886 ************************************ 00:41:16.886 START TEST raid5f_rebuild_test_sb 00:41:16.886 ************************************ 00:41:16.886 19:34:32 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 true false 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@544 -- # raid_pid=139796 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139796 /var/tmp/spdk-raid.sock 00:41:16.887 19:34:32 -- common/autotest_common.sh@817 -- # '[' -z 139796 ']' 00:41:16.887 19:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:16.887 19:34:32 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:16.887 19:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:16.887 19:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:16.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:16.887 19:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:16.887 19:34:32 -- common/autotest_common.sh@10 -- # set +x 00:41:17.146 [2024-04-18 19:34:32.851211] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:41:17.146 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:17.146 Zero copy mechanism will not be used. 00:41:17.146 [2024-04-18 19:34:32.851352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139796 ] 00:41:17.146 [2024-04-18 19:34:33.010909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.404 [2024-04-18 19:34:33.246491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.663 [2024-04-18 19:34:33.497702] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:17.922 19:34:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:17.922 19:34:33 -- common/autotest_common.sh@850 -- # return 0 00:41:17.922 19:34:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:17.922 19:34:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:17.923 19:34:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:18.216 BaseBdev1_malloc 00:41:18.216 19:34:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:18.476 [2024-04-18 19:34:34.301714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:18.476 [2024-04-18 19:34:34.301851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:18.476 [2024-04-18 19:34:34.301891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:41:18.476 [2024-04-18 19:34:34.301944] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:18.476 [2024-04-18 19:34:34.304767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:18.476 [2024-04-18 19:34:34.304843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:18.476 BaseBdev1 00:41:18.476 19:34:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:18.476 19:34:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:18.476 19:34:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:19.045 BaseBdev2_malloc 00:41:19.045 19:34:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:19.303 [2024-04-18 19:34:34.985902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:19.303 [2024-04-18 19:34:34.986003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:19.303 [2024-04-18 19:34:34.986053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:41:19.303 [2024-04-18 19:34:34.986109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:19.303 [2024-04-18 19:34:34.988709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:19.303 [2024-04-18 19:34:34.988771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:19.303 BaseBdev2 00:41:19.303 19:34:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:41:19.303 19:34:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:19.303 19:34:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:19.562 BaseBdev3_malloc 00:41:19.562 19:34:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:19.562 [2024-04-18 19:34:35.456165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:19.562 [2024-04-18 19:34:35.456270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:19.562 [2024-04-18 19:34:35.456312] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:41:19.562 [2024-04-18 19:34:35.456355] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:19.562 [2024-04-18 19:34:35.458923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:19.562 [2024-04-18 19:34:35.458998] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:19.562 BaseBdev3 00:41:19.562 19:34:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:41:20.128 spare_malloc 00:41:20.128 19:34:35 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:20.129 spare_delay 00:41:20.129 19:34:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:20.387 [2024-04-18 19:34:36.303732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:20.387 [2024-04-18 19:34:36.303837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:20.387 [2024-04-18 19:34:36.303875] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:20.387 [2024-04-18 19:34:36.303918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:20.387 [2024-04-18 19:34:36.306533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:20.387 [2024-04-18 19:34:36.306611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:20.387 spare 00:41:20.646 19:34:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:41:20.646 [2024-04-18 19:34:36.555881] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:20.646 [2024-04-18 19:34:36.558101] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:20.646 [2024-04-18 19:34:36.558196] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:20.646 [2024-04-18 19:34:36.558428] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:41:20.646 [2024-04-18 19:34:36.558448] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:41:20.646 [2024-04-18 19:34:36.558625] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:41:20.646 [2024-04-18 19:34:36.566212] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:41:20.646 [2024-04-18 19:34:36.566251] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:41:20.646 [2024-04-18 19:34:36.566512] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:20.905 "name": "raid_bdev1", 00:41:20.905 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:20.905 "strip_size_kb": 64, 00:41:20.905 "state": "online", 00:41:20.905 "raid_level": "raid5f", 00:41:20.905 "superblock": true, 00:41:20.905 "num_base_bdevs": 3, 00:41:20.905 "num_base_bdevs_discovered": 3, 00:41:20.905 "num_base_bdevs_operational": 3, 00:41:20.905 "base_bdevs_list": [ 00:41:20.905 { 00:41:20.905 "name": "BaseBdev1", 00:41:20.905 "uuid": "e067673f-ff68-5511-af41-c0afc763e54c", 00:41:20.905 "is_configured": true, 00:41:20.905 "data_offset": 2048, 00:41:20.905 "data_size": 63488 00:41:20.905 }, 00:41:20.905 { 00:41:20.905 "name": "BaseBdev2", 00:41:20.905 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:20.905 "is_configured": true, 00:41:20.905 "data_offset": 2048, 00:41:20.905 "data_size": 63488 00:41:20.905 }, 00:41:20.905 { 00:41:20.905 "name": "BaseBdev3", 00:41:20.905 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:20.905 "is_configured": true, 00:41:20.905 "data_offset": 2048, 00:41:20.905 "data_size": 63488 00:41:20.905 } 00:41:20.905 ] 00:41:20.905 }' 00:41:20.905 19:34:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:20.905 19:34:36 -- common/autotest_common.sh@10 -- # set +x 00:41:21.838 19:34:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:21.838 19:34:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:41:21.838 [2024-04-18 19:34:37.746523] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:21.838 19:34:37 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:41:22.096 19:34:37 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:22.096 19:34:37 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:22.096 19:34:37 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:41:22.096 19:34:37 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:41:22.096 19:34:37 -- 
bdev/bdev_raid.sh@576 -- # local write_unit_size 00:41:22.096 19:34:37 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@12 -- # local i 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:22.096 19:34:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:22.353 [2024-04-18 19:34:38.222566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:41:22.353 /dev/nbd0 00:41:22.611 19:34:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:22.611 19:34:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:22.611 19:34:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:41:22.611 19:34:38 -- common/autotest_common.sh@855 -- # local i 00:41:22.611 19:34:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:22.611 19:34:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:22.611 19:34:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:41:22.611 19:34:38 -- common/autotest_common.sh@859 -- # break 00:41:22.611 19:34:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:22.611 19:34:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:22.611 19:34:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:22.611 1+0 records in 00:41:22.611 1+0 records out 00:41:22.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285425 s, 14.4 MB/s 00:41:22.611 19:34:38 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:22.611 19:34:38 -- common/autotest_common.sh@872 -- # size=4096 00:41:22.611 19:34:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:22.611 19:34:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:22.611 19:34:38 -- common/autotest_common.sh@875 -- # return 0 00:41:22.611 19:34:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:22.611 19:34:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:22.611 19:34:38 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:41:22.611 19:34:38 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:41:22.611 19:34:38 -- bdev/bdev_raid.sh@582 -- # echo 128 00:41:22.611 19:34:38 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:41:22.869 496+0 records in 00:41:22.869 496+0 records out 00:41:22.869 65011712 bytes (65 MB, 62 MiB) copied, 0.434149 s, 150 MB/s 00:41:22.869 19:34:38 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:41:22.869 19:34:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:22.869 19:34:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:41:22.869 19:34:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:22.869 19:34:38 -- bdev/nbd_common.sh@51 -- # local i 00:41:22.869 19:34:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:41:22.869 19:34:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:23.127 [2024-04-18 19:34:39.039020] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@41 -- # break 00:41:23.127 19:34:39 -- bdev/nbd_common.sh@45 -- # return 0 00:41:23.127 19:34:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:41:23.385 [2024-04-18 19:34:39.254468] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:23.385 19:34:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:23.643 19:34:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:23.643 "name": "raid_bdev1", 00:41:23.643 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:23.643 "strip_size_kb": 64, 00:41:23.643 "state": "online", 00:41:23.643 "raid_level": "raid5f", 00:41:23.643 "superblock": true, 00:41:23.643 "num_base_bdevs": 3, 00:41:23.643 "num_base_bdevs_discovered": 2, 00:41:23.643 "num_base_bdevs_operational": 2, 00:41:23.643 "base_bdevs_list": [ 00:41:23.643 { 00:41:23.643 "name": null, 00:41:23.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:23.643 "is_configured": false, 00:41:23.643 "data_offset": 2048, 00:41:23.643 "data_size": 63488 00:41:23.643 }, 00:41:23.643 { 00:41:23.643 "name": "BaseBdev2", 00:41:23.643 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:23.643 "is_configured": true, 00:41:23.643 "data_offset": 2048, 00:41:23.643 "data_size": 63488 00:41:23.643 }, 00:41:23.643 { 00:41:23.643 "name": "BaseBdev3", 00:41:23.643 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:23.643 "is_configured": true, 00:41:23.643 "data_offset": 2048, 00:41:23.643 "data_size": 63488 00:41:23.643 } 00:41:23.643 ] 00:41:23.643 }' 00:41:23.643 19:34:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:23.643 19:34:39 -- common/autotest_common.sh@10 -- # set +x 00:41:24.578 19:34:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 spare 00:41:24.578 [2024-04-18 19:34:40.434743] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:24.578 [2024-04-18 19:34:40.434821] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:24.578 [2024-04-18 19:34:40.454531] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002acc0 00:41:24.578 [2024-04-18 19:34:40.463762] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:24.578 19:34:40 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:25.984 "name": "raid_bdev1", 00:41:25.984 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:25.984 "strip_size_kb": 64, 00:41:25.984 "state": "online", 00:41:25.984 "raid_level": "raid5f", 00:41:25.984 "superblock": true, 00:41:25.984 "num_base_bdevs": 3, 00:41:25.984 "num_base_bdevs_discovered": 3, 00:41:25.984 "num_base_bdevs_operational": 3, 00:41:25.984 "process": { 00:41:25.984 "type": "rebuild", 00:41:25.984 "target": "spare", 00:41:25.984 "progress": { 00:41:25.984 "blocks": 22528, 00:41:25.984 "percent": 17 00:41:25.984 } 00:41:25.984 }, 00:41:25.984 "base_bdevs_list": [ 00:41:25.984 { 00:41:25.984 "name": "spare", 00:41:25.984 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:25.984 "is_configured": true, 00:41:25.984 "data_offset": 2048, 00:41:25.984 "data_size": 63488 00:41:25.984 }, 00:41:25.984 { 00:41:25.984 "name": "BaseBdev2", 00:41:25.984 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:25.984 "is_configured": true, 00:41:25.984 "data_offset": 2048, 00:41:25.984 "data_size": 63488 00:41:25.984 }, 00:41:25.984 { 00:41:25.984 "name": "BaseBdev3", 00:41:25.984 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:25.984 "is_configured": true, 00:41:25.984 "data_offset": 2048, 00:41:25.984 "data_size": 63488 00:41:25.984 } 00:41:25.984 ] 00:41:25.984 }' 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:25.984 19:34:41 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:26.242 [2024-04-18 19:34:42.013184] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:26.242 [2024-04-18 19:34:42.081484] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:26.242 [2024-04-18 19:34:42.081605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@607 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:26.242 19:34:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:26.500 19:34:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:26.500 "name": "raid_bdev1", 00:41:26.500 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:26.500 "strip_size_kb": 64, 00:41:26.500 "state": "online", 00:41:26.500 "raid_level": "raid5f", 00:41:26.500 "superblock": true, 00:41:26.500 "num_base_bdevs": 3, 00:41:26.500 "num_base_bdevs_discovered": 2, 00:41:26.500 "num_base_bdevs_operational": 2, 00:41:26.500 "base_bdevs_list": [ 00:41:26.500 { 00:41:26.500 "name": null, 00:41:26.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:26.500 "is_configured": false, 00:41:26.500 "data_offset": 2048, 00:41:26.500 "data_size": 63488 00:41:26.500 }, 00:41:26.500 { 00:41:26.500 "name": "BaseBdev2", 00:41:26.500 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:26.500 "is_configured": true, 00:41:26.500 "data_offset": 2048, 00:41:26.500 "data_size": 63488 00:41:26.500 }, 00:41:26.500 { 00:41:26.500 "name": "BaseBdev3", 00:41:26.500 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:26.500 "is_configured": true, 00:41:26.500 "data_offset": 2048, 00:41:26.500 "data_size": 63488 00:41:26.500 } 00:41:26.500 ] 00:41:26.500 }' 00:41:26.500 19:34:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:26.500 19:34:42 -- common/autotest_common.sh@10 -- # set +x 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:27.435 "name": "raid_bdev1", 00:41:27.435 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:27.435 "strip_size_kb": 64, 00:41:27.435 "state": "online", 00:41:27.435 "raid_level": "raid5f", 00:41:27.435 "superblock": true, 00:41:27.435 "num_base_bdevs": 3, 00:41:27.435 "num_base_bdevs_discovered": 2, 00:41:27.435 "num_base_bdevs_operational": 2, 00:41:27.435 "base_bdevs_list": [ 00:41:27.435 { 00:41:27.435 "name": null, 00:41:27.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:27.435 "is_configured": false, 00:41:27.435 
"data_offset": 2048, 00:41:27.435 "data_size": 63488 00:41:27.435 }, 00:41:27.435 { 00:41:27.435 "name": "BaseBdev2", 00:41:27.435 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:27.435 "is_configured": true, 00:41:27.435 "data_offset": 2048, 00:41:27.435 "data_size": 63488 00:41:27.435 }, 00:41:27.435 { 00:41:27.435 "name": "BaseBdev3", 00:41:27.435 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:27.435 "is_configured": true, 00:41:27.435 "data_offset": 2048, 00:41:27.435 "data_size": 63488 00:41:27.435 } 00:41:27.435 ] 00:41:27.435 }' 00:41:27.435 19:34:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:27.694 19:34:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:27.694 19:34:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:27.694 19:34:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:27.694 19:34:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:27.952 [2024-04-18 19:34:43.720955] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:27.952 [2024-04-18 19:34:43.721019] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:27.952 [2024-04-18 19:34:43.738981] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:41:27.952 [2024-04-18 19:34:43.748315] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:27.952 19:34:43 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:28.888 19:34:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:29.454 "name": "raid_bdev1", 00:41:29.454 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:29.454 "strip_size_kb": 64, 00:41:29.454 "state": "online", 00:41:29.454 "raid_level": "raid5f", 00:41:29.454 "superblock": true, 00:41:29.454 "num_base_bdevs": 3, 00:41:29.454 "num_base_bdevs_discovered": 3, 00:41:29.454 "num_base_bdevs_operational": 3, 00:41:29.454 "process": { 00:41:29.454 "type": "rebuild", 00:41:29.454 "target": "spare", 00:41:29.454 "progress": { 00:41:29.454 "blocks": 26624, 00:41:29.454 "percent": 20 00:41:29.454 } 00:41:29.454 }, 00:41:29.454 "base_bdevs_list": [ 00:41:29.454 { 00:41:29.454 "name": "spare", 00:41:29.454 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:29.454 "is_configured": true, 00:41:29.454 "data_offset": 2048, 00:41:29.454 "data_size": 63488 00:41:29.454 }, 00:41:29.454 { 00:41:29.454 "name": "BaseBdev2", 00:41:29.454 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:29.454 "is_configured": true, 00:41:29.454 "data_offset": 2048, 00:41:29.454 "data_size": 63488 00:41:29.454 }, 00:41:29.454 { 00:41:29.454 "name": "BaseBdev3", 00:41:29.454 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:29.454 "is_configured": true, 00:41:29.454 "data_offset": 
2048, 00:41:29.454 "data_size": 63488 00:41:29.454 } 00:41:29.454 ] 00:41:29.454 }' 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:41:29.454 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@657 -- # local timeout=730 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:29.454 19:34:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:29.713 19:34:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:29.713 "name": "raid_bdev1", 00:41:29.713 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:29.713 "strip_size_kb": 64, 00:41:29.713 "state": "online", 00:41:29.713 "raid_level": "raid5f", 00:41:29.713 "superblock": true, 00:41:29.713 "num_base_bdevs": 3, 00:41:29.713 "num_base_bdevs_discovered": 3, 00:41:29.713 "num_base_bdevs_operational": 3, 00:41:29.713 "process": { 00:41:29.713 "type": "rebuild", 00:41:29.713 "target": "spare", 00:41:29.713 "progress": { 00:41:29.713 "blocks": 36864, 00:41:29.713 "percent": 29 00:41:29.713 } 00:41:29.713 }, 00:41:29.713 "base_bdevs_list": [ 00:41:29.713 { 00:41:29.713 "name": "spare", 00:41:29.713 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:29.713 "is_configured": true, 00:41:29.713 "data_offset": 2048, 00:41:29.713 "data_size": 63488 00:41:29.713 }, 00:41:29.713 { 00:41:29.713 "name": "BaseBdev2", 00:41:29.713 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:29.713 "is_configured": true, 00:41:29.713 "data_offset": 2048, 00:41:29.713 "data_size": 63488 00:41:29.713 }, 00:41:29.713 { 00:41:29.713 "name": "BaseBdev3", 00:41:29.713 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:29.713 "is_configured": true, 00:41:29.713 "data_offset": 2048, 00:41:29.713 "data_size": 63488 00:41:29.713 } 00:41:29.713 ] 00:41:29.713 }' 00:41:29.713 19:34:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:29.971 19:34:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:29.971 19:34:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:29.971 19:34:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:29.971 19:34:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:30.903 19:34:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:30.903 19:34:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:41:30.903 19:34:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:30.903 19:34:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:30.903 19:34:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:30.904 19:34:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:30.904 19:34:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:30.904 19:34:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:31.162 19:34:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:31.162 "name": "raid_bdev1", 00:41:31.162 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:31.162 "strip_size_kb": 64, 00:41:31.162 "state": "online", 00:41:31.162 "raid_level": "raid5f", 00:41:31.162 "superblock": true, 00:41:31.162 "num_base_bdevs": 3, 00:41:31.162 "num_base_bdevs_discovered": 3, 00:41:31.162 "num_base_bdevs_operational": 3, 00:41:31.162 "process": { 00:41:31.162 "type": "rebuild", 00:41:31.162 "target": "spare", 00:41:31.162 "progress": { 00:41:31.162 "blocks": 63488, 00:41:31.162 "percent": 50 00:41:31.162 } 00:41:31.162 }, 00:41:31.162 "base_bdevs_list": [ 00:41:31.162 { 00:41:31.162 "name": "spare", 00:41:31.162 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:31.162 "is_configured": true, 00:41:31.162 "data_offset": 2048, 00:41:31.162 "data_size": 63488 00:41:31.162 }, 00:41:31.162 { 00:41:31.162 "name": "BaseBdev2", 00:41:31.162 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:31.162 "is_configured": true, 00:41:31.162 "data_offset": 2048, 00:41:31.162 "data_size": 63488 00:41:31.162 }, 00:41:31.162 { 00:41:31.162 "name": "BaseBdev3", 00:41:31.162 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:31.162 "is_configured": true, 00:41:31.162 "data_offset": 2048, 00:41:31.162 "data_size": 63488 00:41:31.162 } 00:41:31.162 ] 00:41:31.162 }' 00:41:31.162 19:34:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:31.162 19:34:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:31.162 19:34:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:31.162 19:34:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:31.162 19:34:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:32.535 "name": "raid_bdev1", 00:41:32.535 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:32.535 "strip_size_kb": 64, 00:41:32.535 "state": "online", 00:41:32.535 "raid_level": "raid5f", 00:41:32.535 "superblock": true, 00:41:32.535 "num_base_bdevs": 3, 00:41:32.535 "num_base_bdevs_discovered": 3, 00:41:32.535 "num_base_bdevs_operational": 3, 00:41:32.535 "process": { 00:41:32.535 "type": "rebuild", 
00:41:32.535 "target": "spare", 00:41:32.535 "progress": { 00:41:32.535 "blocks": 94208, 00:41:32.535 "percent": 74 00:41:32.535 } 00:41:32.535 }, 00:41:32.535 "base_bdevs_list": [ 00:41:32.535 { 00:41:32.535 "name": "spare", 00:41:32.535 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:32.535 "is_configured": true, 00:41:32.535 "data_offset": 2048, 00:41:32.535 "data_size": 63488 00:41:32.535 }, 00:41:32.535 { 00:41:32.535 "name": "BaseBdev2", 00:41:32.535 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:32.535 "is_configured": true, 00:41:32.535 "data_offset": 2048, 00:41:32.535 "data_size": 63488 00:41:32.535 }, 00:41:32.535 { 00:41:32.535 "name": "BaseBdev3", 00:41:32.535 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:32.535 "is_configured": true, 00:41:32.535 "data_offset": 2048, 00:41:32.535 "data_size": 63488 00:41:32.535 } 00:41:32.535 ] 00:41:32.535 }' 00:41:32.535 19:34:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:32.793 19:34:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:32.793 19:34:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:32.793 19:34:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:32.793 19:34:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:33.730 19:34:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:33.989 19:34:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:33.989 "name": "raid_bdev1", 00:41:33.989 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:33.989 "strip_size_kb": 64, 00:41:33.989 "state": "online", 00:41:33.989 "raid_level": "raid5f", 00:41:33.989 "superblock": true, 00:41:33.989 "num_base_bdevs": 3, 00:41:33.989 "num_base_bdevs_discovered": 3, 00:41:33.989 "num_base_bdevs_operational": 3, 00:41:33.989 "process": { 00:41:33.989 "type": "rebuild", 00:41:33.989 "target": "spare", 00:41:33.989 "progress": { 00:41:33.989 "blocks": 122880, 00:41:33.989 "percent": 96 00:41:33.989 } 00:41:33.989 }, 00:41:33.989 "base_bdevs_list": [ 00:41:33.989 { 00:41:33.989 "name": "spare", 00:41:33.989 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:33.989 "is_configured": true, 00:41:33.989 "data_offset": 2048, 00:41:33.989 "data_size": 63488 00:41:33.989 }, 00:41:33.989 { 00:41:33.989 "name": "BaseBdev2", 00:41:33.989 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:33.989 "is_configured": true, 00:41:33.989 "data_offset": 2048, 00:41:33.989 "data_size": 63488 00:41:33.989 }, 00:41:33.989 { 00:41:33.989 "name": "BaseBdev3", 00:41:33.989 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:33.989 "is_configured": true, 00:41:33.989 "data_offset": 2048, 00:41:33.989 "data_size": 63488 00:41:33.989 } 00:41:33.989 ] 00:41:33.989 }' 00:41:33.989 19:34:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:33.989 19:34:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:41:33.989 19:34:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:34.248 19:34:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:34.248 19:34:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:34.248 [2024-04-18 19:34:50.017647] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:34.248 [2024-04-18 19:34:50.017744] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:34.248 [2024-04-18 19:34:50.017910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:35.202 19:34:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:35.499 "name": "raid_bdev1", 00:41:35.499 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:35.499 "strip_size_kb": 64, 00:41:35.499 "state": "online", 00:41:35.499 "raid_level": "raid5f", 00:41:35.499 "superblock": true, 00:41:35.499 "num_base_bdevs": 3, 00:41:35.499 "num_base_bdevs_discovered": 3, 00:41:35.499 "num_base_bdevs_operational": 3, 00:41:35.499 "base_bdevs_list": [ 00:41:35.499 { 00:41:35.499 "name": "spare", 00:41:35.499 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:35.499 "is_configured": true, 00:41:35.499 "data_offset": 2048, 00:41:35.499 "data_size": 63488 00:41:35.499 }, 00:41:35.499 { 00:41:35.499 "name": "BaseBdev2", 00:41:35.499 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:35.499 "is_configured": true, 00:41:35.499 "data_offset": 2048, 00:41:35.499 "data_size": 63488 00:41:35.499 }, 00:41:35.499 { 00:41:35.499 "name": "BaseBdev3", 00:41:35.499 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:35.499 "is_configured": true, 00:41:35.499 "data_offset": 2048, 00:41:35.499 "data_size": 63488 00:41:35.499 } 00:41:35.499 ] 00:41:35.499 }' 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@660 -- # break 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:35.499 19:34:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:35.756 
19:34:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:35.756 "name": "raid_bdev1", 00:41:35.756 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:35.756 "strip_size_kb": 64, 00:41:35.756 "state": "online", 00:41:35.756 "raid_level": "raid5f", 00:41:35.756 "superblock": true, 00:41:35.756 "num_base_bdevs": 3, 00:41:35.756 "num_base_bdevs_discovered": 3, 00:41:35.756 "num_base_bdevs_operational": 3, 00:41:35.756 "base_bdevs_list": [ 00:41:35.756 { 00:41:35.756 "name": "spare", 00:41:35.756 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:35.756 "is_configured": true, 00:41:35.756 "data_offset": 2048, 00:41:35.756 "data_size": 63488 00:41:35.756 }, 00:41:35.756 { 00:41:35.756 "name": "BaseBdev2", 00:41:35.756 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:35.756 "is_configured": true, 00:41:35.757 "data_offset": 2048, 00:41:35.757 "data_size": 63488 00:41:35.757 }, 00:41:35.757 { 00:41:35.757 "name": "BaseBdev3", 00:41:35.757 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:35.757 "is_configured": true, 00:41:35.757 "data_offset": 2048, 00:41:35.757 "data_size": 63488 00:41:35.757 } 00:41:35.757 ] 00:41:35.757 }' 00:41:35.757 19:34:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:35.757 19:34:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:35.757 19:34:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:36.014 "name": "raid_bdev1", 00:41:36.014 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:36.014 "strip_size_kb": 64, 00:41:36.014 "state": "online", 00:41:36.014 "raid_level": "raid5f", 00:41:36.014 "superblock": true, 00:41:36.014 "num_base_bdevs": 3, 00:41:36.014 "num_base_bdevs_discovered": 3, 00:41:36.014 "num_base_bdevs_operational": 3, 00:41:36.014 "base_bdevs_list": [ 00:41:36.014 { 00:41:36.014 "name": "spare", 00:41:36.014 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:36.014 "is_configured": true, 00:41:36.014 "data_offset": 2048, 00:41:36.014 "data_size": 63488 00:41:36.014 }, 00:41:36.014 { 00:41:36.014 "name": "BaseBdev2", 00:41:36.014 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:36.014 "is_configured": true, 00:41:36.014 "data_offset": 2048, 00:41:36.014 "data_size": 63488 00:41:36.014 }, 00:41:36.014 { 00:41:36.014 "name": "BaseBdev3", 00:41:36.014 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:36.014 
"is_configured": true, 00:41:36.014 "data_offset": 2048, 00:41:36.014 "data_size": 63488 00:41:36.014 } 00:41:36.014 ] 00:41:36.014 }' 00:41:36.014 19:34:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:36.014 19:34:51 -- common/autotest_common.sh@10 -- # set +x 00:41:36.947 19:34:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:36.947 [2024-04-18 19:34:52.832020] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:36.947 [2024-04-18 19:34:52.832071] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:36.947 [2024-04-18 19:34:52.832187] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:36.947 [2024-04-18 19:34:52.832275] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:36.947 [2024-04-18 19:34:52.832286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:41:36.947 19:34:52 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:36.947 19:34:52 -- bdev/bdev_raid.sh@671 -- # jq length 00:41:37.205 19:34:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:41:37.205 19:34:53 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:41:37.205 19:34:53 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@12 -- # local i 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:37.205 19:34:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:37.461 /dev/nbd0 00:41:37.719 19:34:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:37.719 19:34:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:37.719 19:34:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:41:37.719 19:34:53 -- common/autotest_common.sh@855 -- # local i 00:41:37.719 19:34:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:37.719 19:34:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:37.719 19:34:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:41:37.719 19:34:53 -- common/autotest_common.sh@859 -- # break 00:41:37.719 19:34:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:37.719 19:34:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:37.719 19:34:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:37.719 1+0 records in 00:41:37.719 1+0 records out 00:41:37.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299723 s, 13.7 MB/s 00:41:37.719 19:34:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:37.719 19:34:53 -- common/autotest_common.sh@872 -- # size=4096 00:41:37.719 19:34:53 -- 
common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:37.719 19:34:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:37.719 19:34:53 -- common/autotest_common.sh@875 -- # return 0 00:41:37.719 19:34:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:37.719 19:34:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:37.719 19:34:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:41:37.977 /dev/nbd1 00:41:37.977 19:34:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:37.977 19:34:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:37.977 19:34:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:41:37.977 19:34:53 -- common/autotest_common.sh@855 -- # local i 00:41:37.977 19:34:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:41:37.977 19:34:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:41:37.977 19:34:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:41:37.977 19:34:53 -- common/autotest_common.sh@859 -- # break 00:41:37.977 19:34:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:37.977 19:34:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:37.977 19:34:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:37.977 1+0 records in 00:41:37.977 1+0 records out 00:41:37.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575736 s, 7.1 MB/s 00:41:37.977 19:34:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:37.977 19:34:53 -- common/autotest_common.sh@872 -- # size=4096 00:41:37.977 19:34:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:37.977 19:34:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:41:37.977 19:34:53 -- common/autotest_common.sh@875 -- # return 0 00:41:37.977 19:34:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:37.977 19:34:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:37.977 19:34:53 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:41:38.236 19:34:53 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:41:38.236 19:34:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:38.236 19:34:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:41:38.236 19:34:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:38.236 19:34:53 -- bdev/nbd_common.sh@51 -- # local i 00:41:38.236 19:34:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:38.236 19:34:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@41 -- # break 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@45 -- # return 0 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:38.494 19:34:54 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@41 -- # break 00:41:38.753 19:34:54 -- bdev/nbd_common.sh@45 -- # return 0 00:41:38.753 19:34:54 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:41:38.753 19:34:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:41:38.753 19:34:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:41:38.753 19:34:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:41:39.011 19:34:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:39.282 [2024-04-18 19:34:55.064016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:39.282 [2024-04-18 19:34:55.064136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:39.282 [2024-04-18 19:34:55.064173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:41:39.282 [2024-04-18 19:34:55.064206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:39.282 [2024-04-18 19:34:55.067011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:39.282 [2024-04-18 19:34:55.067129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:39.282 [2024-04-18 19:34:55.067283] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:41:39.282 [2024-04-18 19:34:55.067385] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:39.282 BaseBdev1 00:41:39.282 19:34:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:41:39.282 19:34:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:41:39.282 19:34:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:41:39.573 19:34:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:39.831 [2024-04-18 19:34:55.672223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:39.831 [2024-04-18 19:34:55.672349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:39.831 [2024-04-18 19:34:55.672410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:41:39.831 [2024-04-18 19:34:55.672449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:39.831 [2024-04-18 19:34:55.673166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:39.831 [2024-04-18 19:34:55.673263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:39.831 [2024-04-18 19:34:55.673465] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev2 00:41:39.831 [2024-04-18 19:34:55.673500] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:41:39.831 [2024-04-18 19:34:55.673515] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:39.831 [2024-04-18 19:34:55.673548] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:41:39.831 [2024-04-18 19:34:55.673649] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:39.831 BaseBdev2 00:41:39.831 19:34:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:41:39.831 19:34:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:41:39.831 19:34:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:41:40.089 19:34:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:40.348 [2024-04-18 19:34:56.188261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:40.348 [2024-04-18 19:34:56.188391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:40.348 [2024-04-18 19:34:56.188469] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:41:40.348 [2024-04-18 19:34:56.188501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:40.348 [2024-04-18 19:34:56.189193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:40.348 [2024-04-18 19:34:56.189272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:40.348 [2024-04-18 19:34:56.189463] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:41:40.348 [2024-04-18 19:34:56.189499] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:40.348 BaseBdev3 00:41:40.348 19:34:56 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:40.917 [2024-04-18 19:34:56.816399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:40.917 [2024-04-18 19:34:56.816514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:40.917 [2024-04-18 19:34:56.816572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:41:40.917 [2024-04-18 19:34:56.816616] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:40.917 [2024-04-18 19:34:56.817247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:40.917 [2024-04-18 19:34:56.817326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:40.917 [2024-04-18 19:34:56.817520] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:41:40.917 [2024-04-18 19:34:56.817580] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:40.917 spare 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:40.917 19:34:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:40.918 19:34:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:40.918 19:34:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:40.918 19:34:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:40.918 19:34:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:40.918 19:34:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:40.918 19:34:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:41.176 [2024-04-18 19:34:56.917713] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:41:41.176 [2024-04-18 19:34:56.917753] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:41:41.176 [2024-04-18 19:34:56.917908] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004b590 00:41:41.176 [2024-04-18 19:34:56.924685] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:41:41.176 [2024-04-18 19:34:56.924717] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:41:41.176 [2024-04-18 19:34:56.924916] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:41.434 19:34:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:41.434 "name": "raid_bdev1", 00:41:41.434 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:41.434 "strip_size_kb": 64, 00:41:41.434 "state": "online", 00:41:41.434 "raid_level": "raid5f", 00:41:41.435 "superblock": true, 00:41:41.435 "num_base_bdevs": 3, 00:41:41.435 "num_base_bdevs_discovered": 3, 00:41:41.435 "num_base_bdevs_operational": 3, 00:41:41.435 "base_bdevs_list": [ 00:41:41.435 { 00:41:41.435 "name": "spare", 00:41:41.435 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:41.435 "is_configured": true, 00:41:41.435 "data_offset": 2048, 00:41:41.435 "data_size": 63488 00:41:41.435 }, 00:41:41.435 { 00:41:41.435 "name": "BaseBdev2", 00:41:41.435 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:41.435 "is_configured": true, 00:41:41.435 "data_offset": 2048, 00:41:41.435 "data_size": 63488 00:41:41.435 }, 00:41:41.435 { 00:41:41.435 "name": "BaseBdev3", 00:41:41.435 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:41.435 "is_configured": true, 00:41:41.435 "data_offset": 2048, 00:41:41.435 "data_size": 63488 00:41:41.435 } 00:41:41.435 ] 00:41:41.435 }' 00:41:41.435 19:34:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:41.435 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:41:42.002 19:34:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:42.002 19:34:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:42.002 19:34:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:42.002 19:34:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:42.002 19:34:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:42.002 19:34:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:42.002 
19:34:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:42.260 "name": "raid_bdev1", 00:41:42.260 "uuid": "0886836b-5e8e-483e-9f36-5053997fe0e7", 00:41:42.260 "strip_size_kb": 64, 00:41:42.260 "state": "online", 00:41:42.260 "raid_level": "raid5f", 00:41:42.260 "superblock": true, 00:41:42.260 "num_base_bdevs": 3, 00:41:42.260 "num_base_bdevs_discovered": 3, 00:41:42.260 "num_base_bdevs_operational": 3, 00:41:42.260 "base_bdevs_list": [ 00:41:42.260 { 00:41:42.260 "name": "spare", 00:41:42.260 "uuid": "8b455ced-df0d-5947-946c-9d08ff562e56", 00:41:42.260 "is_configured": true, 00:41:42.260 "data_offset": 2048, 00:41:42.260 "data_size": 63488 00:41:42.260 }, 00:41:42.260 { 00:41:42.260 "name": "BaseBdev2", 00:41:42.260 "uuid": "a8f05196-ffa6-5d5b-aad6-22e69fa56dfc", 00:41:42.260 "is_configured": true, 00:41:42.260 "data_offset": 2048, 00:41:42.260 "data_size": 63488 00:41:42.260 }, 00:41:42.260 { 00:41:42.260 "name": "BaseBdev3", 00:41:42.260 "uuid": "56be9d77-d033-5c01-88b7-eba31e04b924", 00:41:42.260 "is_configured": true, 00:41:42.260 "data_offset": 2048, 00:41:42.260 "data_size": 63488 00:41:42.260 } 00:41:42.260 ] 00:41:42.260 }' 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:42.260 19:34:58 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:41:42.519 19:34:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:41:42.519 19:34:58 -- bdev/bdev_raid.sh@709 -- # killprocess 139796 00:41:42.519 19:34:58 -- common/autotest_common.sh@936 -- # '[' -z 139796 ']' 00:41:42.519 19:34:58 -- common/autotest_common.sh@940 -- # kill -0 139796 00:41:42.519 19:34:58 -- common/autotest_common.sh@941 -- # uname 00:41:42.519 19:34:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:42.519 19:34:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139796 00:41:42.778 killing process with pid 139796 00:41:42.778 Received shutdown signal, test time was about 60.000000 seconds 00:41:42.778 00:41:42.778 Latency(us) 00:41:42.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:42.778 =================================================================================================================== 00:41:42.778 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:42.778 19:34:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:42.778 19:34:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:42.778 19:34:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139796' 00:41:42.778 19:34:58 -- common/autotest_common.sh@955 -- # kill 139796 00:41:42.778 19:34:58 -- common/autotest_common.sh@960 -- # wait 139796 00:41:42.778 [2024-04-18 19:34:58.460094] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:42.778 [2024-04-18 19:34:58.460221] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:42.778 [2024-04-18 19:34:58.460316] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:42.778 [2024-04-18 19:34:58.460346] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:41:43.038 [2024-04-18 19:34:58.942292] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:45.002 ************************************ 00:41:45.002 END TEST raid5f_rebuild_test_sb 00:41:45.002 ************************************ 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:41:45.002 00:41:45.002 real 0m27.755s 00:41:45.002 user 0m43.572s 00:41:45.002 sys 0m3.417s 00:41:45.002 19:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:45.002 19:35:00 -- common/autotest_common.sh@10 -- # set +x 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:41:45.002 19:35:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:41:45.002 19:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:45.002 19:35:00 -- common/autotest_common.sh@10 -- # set +x 00:41:45.002 ************************************ 00:41:45.002 START TEST raid5f_state_function_test 00:41:45.002 ************************************ 00:41:45.002 19:35:00 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 false 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 
00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=140513 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140513' 00:41:45.002 Process raid pid: 140513 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140513 /var/tmp/spdk-raid.sock 00:41:45.002 19:35:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:41:45.003 19:35:00 -- common/autotest_common.sh@817 -- # '[' -z 140513 ']' 00:41:45.003 19:35:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:45.003 19:35:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:45.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:45.003 19:35:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:45.003 19:35:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:45.003 19:35:00 -- common/autotest_common.sh@10 -- # set +x 00:41:45.003 [2024-04-18 19:35:00.693137] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:41:45.003 [2024-04-18 19:35:00.693299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:45.003 [2024-04-18 19:35:00.857448] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.261 [2024-04-18 19:35:01.081540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.519 [2024-04-18 19:35:01.285740] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:45.778 19:35:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:45.778 19:35:01 -- common/autotest_common.sh@850 -- # return 0 00:41:45.778 19:35:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:41:46.346 [2024-04-18 19:35:01.971216] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:46.346 [2024-04-18 19:35:01.971331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:46.346 [2024-04-18 19:35:01.971353] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:46.346 [2024-04-18 19:35:01.971402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:46.346 [2024-04-18 19:35:01.971414] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:46.346 [2024-04-18 19:35:01.971462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:46.346 [2024-04-18 19:35:01.971476] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:46.346 [2024-04-18 19:35:01.971512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:46.346 19:35:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:46.346 19:35:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:46.346 "name": "Existed_Raid", 00:41:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.346 "strip_size_kb": 64, 00:41:46.346 "state": "configuring", 00:41:46.346 "raid_level": "raid5f", 00:41:46.346 "superblock": false, 00:41:46.346 "num_base_bdevs": 4, 00:41:46.346 "num_base_bdevs_discovered": 0, 00:41:46.346 "num_base_bdevs_operational": 4, 00:41:46.346 "base_bdevs_list": [ 00:41:46.346 { 00:41:46.346 "name": "BaseBdev1", 00:41:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.346 "is_configured": false, 00:41:46.346 "data_offset": 0, 00:41:46.346 "data_size": 0 00:41:46.346 }, 00:41:46.346 { 00:41:46.346 "name": "BaseBdev2", 00:41:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.346 "is_configured": false, 00:41:46.346 "data_offset": 0, 00:41:46.346 "data_size": 0 00:41:46.346 }, 00:41:46.346 { 00:41:46.346 "name": "BaseBdev3", 00:41:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.346 "is_configured": false, 00:41:46.346 "data_offset": 0, 00:41:46.346 "data_size": 0 00:41:46.346 }, 00:41:46.346 { 00:41:46.346 "name": "BaseBdev4", 00:41:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.346 "is_configured": false, 00:41:46.346 "data_offset": 0, 00:41:46.346 "data_size": 0 00:41:46.346 } 00:41:46.346 ] 00:41:46.346 }' 00:41:46.346 19:35:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:46.346 19:35:02 -- common/autotest_common.sh@10 -- # set +x 00:41:47.282 19:35:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:41:47.540 [2024-04-18 19:35:03.311291] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:47.540 [2024-04-18 19:35:03.311341] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:41:47.540 19:35:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:41:47.798 [2024-04-18 19:35:03.583425] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:47.798 [2024-04-18 19:35:03.583503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:47.798 [2024-04-18 19:35:03.583513] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:47.798 [2024-04-18 19:35:03.583538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:47.798 [2024-04-18 19:35:03.583546] 
bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:47.798 [2024-04-18 19:35:03.583580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:47.798 [2024-04-18 19:35:03.583587] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:47.798 [2024-04-18 19:35:03.583609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:47.798 19:35:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:41:48.056 [2024-04-18 19:35:03.862818] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:48.056 BaseBdev1 00:41:48.056 19:35:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:41:48.056 19:35:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:41:48.056 19:35:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:41:48.056 19:35:03 -- common/autotest_common.sh@887 -- # local i 00:41:48.056 19:35:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:41:48.056 19:35:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:41:48.056 19:35:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:48.323 19:35:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:48.586 [ 00:41:48.586 { 00:41:48.586 "name": "BaseBdev1", 00:41:48.586 "aliases": [ 00:41:48.586 "be3c283a-1ca8-48f7-97ee-976613e34b94" 00:41:48.586 ], 00:41:48.586 "product_name": "Malloc disk", 00:41:48.586 "block_size": 512, 00:41:48.586 "num_blocks": 65536, 00:41:48.586 "uuid": "be3c283a-1ca8-48f7-97ee-976613e34b94", 00:41:48.586 "assigned_rate_limits": { 00:41:48.586 "rw_ios_per_sec": 0, 00:41:48.586 "rw_mbytes_per_sec": 0, 00:41:48.586 "r_mbytes_per_sec": 0, 00:41:48.586 "w_mbytes_per_sec": 0 00:41:48.586 }, 00:41:48.586 "claimed": true, 00:41:48.586 "claim_type": "exclusive_write", 00:41:48.586 "zoned": false, 00:41:48.586 "supported_io_types": { 00:41:48.586 "read": true, 00:41:48.586 "write": true, 00:41:48.586 "unmap": true, 00:41:48.586 "write_zeroes": true, 00:41:48.586 "flush": true, 00:41:48.586 "reset": true, 00:41:48.586 "compare": false, 00:41:48.586 "compare_and_write": false, 00:41:48.586 "abort": true, 00:41:48.586 "nvme_admin": false, 00:41:48.586 "nvme_io": false 00:41:48.586 }, 00:41:48.586 "memory_domains": [ 00:41:48.586 { 00:41:48.586 "dma_device_id": "system", 00:41:48.586 "dma_device_type": 1 00:41:48.586 }, 00:41:48.586 { 00:41:48.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:48.586 "dma_device_type": 2 00:41:48.586 } 00:41:48.586 ], 00:41:48.586 "driver_specific": {} 00:41:48.586 } 00:41:48.586 ] 00:41:48.586 19:35:04 -- common/autotest_common.sh@893 -- # return 0 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:48.586 19:35:04 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:48.586 19:35:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:48.844 19:35:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:48.844 "name": "Existed_Raid", 00:41:48.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:48.844 "strip_size_kb": 64, 00:41:48.844 "state": "configuring", 00:41:48.844 "raid_level": "raid5f", 00:41:48.844 "superblock": false, 00:41:48.844 "num_base_bdevs": 4, 00:41:48.844 "num_base_bdevs_discovered": 1, 00:41:48.844 "num_base_bdevs_operational": 4, 00:41:48.844 "base_bdevs_list": [ 00:41:48.844 { 00:41:48.844 "name": "BaseBdev1", 00:41:48.844 "uuid": "be3c283a-1ca8-48f7-97ee-976613e34b94", 00:41:48.844 "is_configured": true, 00:41:48.844 "data_offset": 0, 00:41:48.844 "data_size": 65536 00:41:48.844 }, 00:41:48.844 { 00:41:48.844 "name": "BaseBdev2", 00:41:48.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:48.844 "is_configured": false, 00:41:48.844 "data_offset": 0, 00:41:48.844 "data_size": 0 00:41:48.844 }, 00:41:48.844 { 00:41:48.844 "name": "BaseBdev3", 00:41:48.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:48.844 "is_configured": false, 00:41:48.844 "data_offset": 0, 00:41:48.844 "data_size": 0 00:41:48.844 }, 00:41:48.844 { 00:41:48.844 "name": "BaseBdev4", 00:41:48.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:48.844 "is_configured": false, 00:41:48.844 "data_offset": 0, 00:41:48.844 "data_size": 0 00:41:48.844 } 00:41:48.844 ] 00:41:48.844 }' 00:41:48.844 19:35:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:48.844 19:35:04 -- common/autotest_common.sh@10 -- # set +x 00:41:49.408 19:35:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:41:49.667 [2024-04-18 19:35:05.355215] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:49.667 [2024-04-18 19:35:05.355279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:41:49.667 [2024-04-18 19:35:05.559328] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:49.667 [2024-04-18 19:35:05.561402] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:49.667 [2024-04-18 19:35:05.561494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:49.667 [2024-04-18 19:35:05.561504] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:49.667 [2024-04-18 19:35:05.561529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:49.667 [2024-04-18 19:35:05.561538] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:49.667 
[2024-04-18 19:35:05.561554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:49.667 19:35:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:49.926 19:35:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:49.926 "name": "Existed_Raid", 00:41:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.926 "strip_size_kb": 64, 00:41:49.926 "state": "configuring", 00:41:49.926 "raid_level": "raid5f", 00:41:49.926 "superblock": false, 00:41:49.926 "num_base_bdevs": 4, 00:41:49.926 "num_base_bdevs_discovered": 1, 00:41:49.926 "num_base_bdevs_operational": 4, 00:41:49.926 "base_bdevs_list": [ 00:41:49.926 { 00:41:49.926 "name": "BaseBdev1", 00:41:49.926 "uuid": "be3c283a-1ca8-48f7-97ee-976613e34b94", 00:41:49.926 "is_configured": true, 00:41:49.926 "data_offset": 0, 00:41:49.926 "data_size": 65536 00:41:49.926 }, 00:41:49.926 { 00:41:49.926 "name": "BaseBdev2", 00:41:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.926 "is_configured": false, 00:41:49.926 "data_offset": 0, 00:41:49.926 "data_size": 0 00:41:49.926 }, 00:41:49.926 { 00:41:49.926 "name": "BaseBdev3", 00:41:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.926 "is_configured": false, 00:41:49.926 "data_offset": 0, 00:41:49.926 "data_size": 0 00:41:49.926 }, 00:41:49.926 { 00:41:49.926 "name": "BaseBdev4", 00:41:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.926 "is_configured": false, 00:41:49.926 "data_offset": 0, 00:41:49.926 "data_size": 0 00:41:49.926 } 00:41:49.926 ] 00:41:49.926 }' 00:41:49.926 19:35:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:49.926 19:35:05 -- common/autotest_common.sh@10 -- # set +x 00:41:50.860 19:35:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:41:50.860 [2024-04-18 19:35:06.734435] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:50.860 BaseBdev2 00:41:50.860 19:35:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:41:50.860 19:35:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:41:50.860 19:35:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:41:50.860 19:35:06 -- common/autotest_common.sh@887 -- # local i 00:41:50.860 19:35:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:41:50.860 19:35:06 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:41:50.860 19:35:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:51.426 19:35:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:51.426 [ 00:41:51.426 { 00:41:51.426 "name": "BaseBdev2", 00:41:51.426 "aliases": [ 00:41:51.426 "01d8d2a5-417f-4317-8083-07b755f01b27" 00:41:51.426 ], 00:41:51.426 "product_name": "Malloc disk", 00:41:51.426 "block_size": 512, 00:41:51.426 "num_blocks": 65536, 00:41:51.426 "uuid": "01d8d2a5-417f-4317-8083-07b755f01b27", 00:41:51.426 "assigned_rate_limits": { 00:41:51.426 "rw_ios_per_sec": 0, 00:41:51.426 "rw_mbytes_per_sec": 0, 00:41:51.426 "r_mbytes_per_sec": 0, 00:41:51.426 "w_mbytes_per_sec": 0 00:41:51.426 }, 00:41:51.426 "claimed": true, 00:41:51.426 "claim_type": "exclusive_write", 00:41:51.426 "zoned": false, 00:41:51.426 "supported_io_types": { 00:41:51.426 "read": true, 00:41:51.426 "write": true, 00:41:51.426 "unmap": true, 00:41:51.426 "write_zeroes": true, 00:41:51.426 "flush": true, 00:41:51.426 "reset": true, 00:41:51.426 "compare": false, 00:41:51.426 "compare_and_write": false, 00:41:51.426 "abort": true, 00:41:51.426 "nvme_admin": false, 00:41:51.426 "nvme_io": false 00:41:51.426 }, 00:41:51.426 "memory_domains": [ 00:41:51.426 { 00:41:51.426 "dma_device_id": "system", 00:41:51.426 "dma_device_type": 1 00:41:51.426 }, 00:41:51.426 { 00:41:51.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:51.426 "dma_device_type": 2 00:41:51.426 } 00:41:51.426 ], 00:41:51.426 "driver_specific": {} 00:41:51.426 } 00:41:51.426 ] 00:41:51.426 19:35:07 -- common/autotest_common.sh@893 -- # return 0 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:51.426 19:35:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:51.684 19:35:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:51.684 "name": "Existed_Raid", 00:41:51.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:51.684 "strip_size_kb": 64, 00:41:51.684 "state": "configuring", 00:41:51.684 "raid_level": "raid5f", 00:41:51.684 "superblock": false, 00:41:51.684 "num_base_bdevs": 4, 00:41:51.684 "num_base_bdevs_discovered": 2, 00:41:51.684 "num_base_bdevs_operational": 4, 00:41:51.684 "base_bdevs_list": [ 00:41:51.684 { 00:41:51.684 "name": "BaseBdev1", 00:41:51.684 "uuid": 
"be3c283a-1ca8-48f7-97ee-976613e34b94", 00:41:51.684 "is_configured": true, 00:41:51.684 "data_offset": 0, 00:41:51.684 "data_size": 65536 00:41:51.684 }, 00:41:51.684 { 00:41:51.684 "name": "BaseBdev2", 00:41:51.684 "uuid": "01d8d2a5-417f-4317-8083-07b755f01b27", 00:41:51.684 "is_configured": true, 00:41:51.684 "data_offset": 0, 00:41:51.684 "data_size": 65536 00:41:51.684 }, 00:41:51.684 { 00:41:51.684 "name": "BaseBdev3", 00:41:51.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:51.684 "is_configured": false, 00:41:51.684 "data_offset": 0, 00:41:51.684 "data_size": 0 00:41:51.684 }, 00:41:51.684 { 00:41:51.684 "name": "BaseBdev4", 00:41:51.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:51.684 "is_configured": false, 00:41:51.684 "data_offset": 0, 00:41:51.684 "data_size": 0 00:41:51.684 } 00:41:51.684 ] 00:41:51.684 }' 00:41:51.684 19:35:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:51.684 19:35:07 -- common/autotest_common.sh@10 -- # set +x 00:41:52.249 19:35:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:41:52.507 [2024-04-18 19:35:08.380439] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:52.507 BaseBdev3 00:41:52.507 19:35:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:41:52.507 19:35:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:41:52.507 19:35:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:41:52.507 19:35:08 -- common/autotest_common.sh@887 -- # local i 00:41:52.507 19:35:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:41:52.507 19:35:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:41:52.507 19:35:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:52.765 19:35:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:41:53.023 [ 00:41:53.023 { 00:41:53.023 "name": "BaseBdev3", 00:41:53.023 "aliases": [ 00:41:53.023 "0513230f-a890-4444-bef9-3abe88a00216" 00:41:53.023 ], 00:41:53.023 "product_name": "Malloc disk", 00:41:53.023 "block_size": 512, 00:41:53.023 "num_blocks": 65536, 00:41:53.023 "uuid": "0513230f-a890-4444-bef9-3abe88a00216", 00:41:53.023 "assigned_rate_limits": { 00:41:53.023 "rw_ios_per_sec": 0, 00:41:53.023 "rw_mbytes_per_sec": 0, 00:41:53.023 "r_mbytes_per_sec": 0, 00:41:53.023 "w_mbytes_per_sec": 0 00:41:53.023 }, 00:41:53.023 "claimed": true, 00:41:53.023 "claim_type": "exclusive_write", 00:41:53.023 "zoned": false, 00:41:53.023 "supported_io_types": { 00:41:53.023 "read": true, 00:41:53.023 "write": true, 00:41:53.023 "unmap": true, 00:41:53.023 "write_zeroes": true, 00:41:53.023 "flush": true, 00:41:53.023 "reset": true, 00:41:53.023 "compare": false, 00:41:53.023 "compare_and_write": false, 00:41:53.023 "abort": true, 00:41:53.023 "nvme_admin": false, 00:41:53.023 "nvme_io": false 00:41:53.023 }, 00:41:53.023 "memory_domains": [ 00:41:53.023 { 00:41:53.023 "dma_device_id": "system", 00:41:53.023 "dma_device_type": 1 00:41:53.023 }, 00:41:53.023 { 00:41:53.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:53.023 "dma_device_type": 2 00:41:53.023 } 00:41:53.023 ], 00:41:53.023 "driver_specific": {} 00:41:53.023 } 00:41:53.023 ] 00:41:53.023 19:35:08 -- common/autotest_common.sh@893 -- # return 0 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@254 -- 
# (( i++ )) 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:53.023 19:35:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:53.281 19:35:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:53.281 "name": "Existed_Raid", 00:41:53.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:53.281 "strip_size_kb": 64, 00:41:53.281 "state": "configuring", 00:41:53.281 "raid_level": "raid5f", 00:41:53.281 "superblock": false, 00:41:53.281 "num_base_bdevs": 4, 00:41:53.281 "num_base_bdevs_discovered": 3, 00:41:53.281 "num_base_bdevs_operational": 4, 00:41:53.281 "base_bdevs_list": [ 00:41:53.281 { 00:41:53.281 "name": "BaseBdev1", 00:41:53.281 "uuid": "be3c283a-1ca8-48f7-97ee-976613e34b94", 00:41:53.281 "is_configured": true, 00:41:53.281 "data_offset": 0, 00:41:53.281 "data_size": 65536 00:41:53.281 }, 00:41:53.281 { 00:41:53.281 "name": "BaseBdev2", 00:41:53.281 "uuid": "01d8d2a5-417f-4317-8083-07b755f01b27", 00:41:53.281 "is_configured": true, 00:41:53.281 "data_offset": 0, 00:41:53.281 "data_size": 65536 00:41:53.281 }, 00:41:53.281 { 00:41:53.281 "name": "BaseBdev3", 00:41:53.281 "uuid": "0513230f-a890-4444-bef9-3abe88a00216", 00:41:53.281 "is_configured": true, 00:41:53.281 "data_offset": 0, 00:41:53.281 "data_size": 65536 00:41:53.281 }, 00:41:53.281 { 00:41:53.281 "name": "BaseBdev4", 00:41:53.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:53.281 "is_configured": false, 00:41:53.281 "data_offset": 0, 00:41:53.281 "data_size": 0 00:41:53.281 } 00:41:53.281 ] 00:41:53.281 }' 00:41:53.281 19:35:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:53.281 19:35:09 -- common/autotest_common.sh@10 -- # set +x 00:41:54.215 19:35:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:41:54.215 [2024-04-18 19:35:10.105733] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:54.215 [2024-04-18 19:35:10.105802] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:41:54.215 [2024-04-18 19:35:10.105811] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:41:54.215 [2024-04-18 19:35:10.105932] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:41:54.215 [2024-04-18 19:35:10.114239] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:41:54.215 [2024-04-18 19:35:10.114267] bdev_raid.c:1732:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:41:54.216 [2024-04-18 19:35:10.114547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:54.216 BaseBdev4 00:41:54.216 19:35:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:41:54.216 19:35:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:41:54.216 19:35:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:41:54.216 19:35:10 -- common/autotest_common.sh@887 -- # local i 00:41:54.216 19:35:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:41:54.216 19:35:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:41:54.216 19:35:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:54.474 19:35:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:41:54.732 [ 00:41:54.732 { 00:41:54.732 "name": "BaseBdev4", 00:41:54.732 "aliases": [ 00:41:54.732 "b4de2546-73ab-469d-896c-6a9eeab3c76f" 00:41:54.732 ], 00:41:54.732 "product_name": "Malloc disk", 00:41:54.732 "block_size": 512, 00:41:54.732 "num_blocks": 65536, 00:41:54.732 "uuid": "b4de2546-73ab-469d-896c-6a9eeab3c76f", 00:41:54.732 "assigned_rate_limits": { 00:41:54.732 "rw_ios_per_sec": 0, 00:41:54.732 "rw_mbytes_per_sec": 0, 00:41:54.732 "r_mbytes_per_sec": 0, 00:41:54.732 "w_mbytes_per_sec": 0 00:41:54.732 }, 00:41:54.732 "claimed": true, 00:41:54.732 "claim_type": "exclusive_write", 00:41:54.732 "zoned": false, 00:41:54.732 "supported_io_types": { 00:41:54.732 "read": true, 00:41:54.732 "write": true, 00:41:54.732 "unmap": true, 00:41:54.732 "write_zeroes": true, 00:41:54.732 "flush": true, 00:41:54.732 "reset": true, 00:41:54.732 "compare": false, 00:41:54.732 "compare_and_write": false, 00:41:54.732 "abort": true, 00:41:54.732 "nvme_admin": false, 00:41:54.732 "nvme_io": false 00:41:54.732 }, 00:41:54.732 "memory_domains": [ 00:41:54.732 { 00:41:54.732 "dma_device_id": "system", 00:41:54.732 "dma_device_type": 1 00:41:54.732 }, 00:41:54.732 { 00:41:54.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:54.732 "dma_device_type": 2 00:41:54.732 } 00:41:54.732 ], 00:41:54.732 "driver_specific": {} 00:41:54.732 } 00:41:54.732 ] 00:41:54.732 19:35:10 -- common/autotest_common.sh@893 -- # return 0 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:54.732 19:35:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:54.990 19:35:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:54.990 "name": "Existed_Raid", 00:41:54.990 "uuid": "d15daf2a-9179-4fae-90d5-58612af570f0", 00:41:54.990 "strip_size_kb": 64, 00:41:54.990 "state": "online", 00:41:54.990 "raid_level": "raid5f", 00:41:54.990 "superblock": false, 00:41:54.990 "num_base_bdevs": 4, 00:41:54.990 "num_base_bdevs_discovered": 4, 00:41:54.990 "num_base_bdevs_operational": 4, 00:41:54.990 "base_bdevs_list": [ 00:41:54.990 { 00:41:54.990 "name": "BaseBdev1", 00:41:54.990 "uuid": "be3c283a-1ca8-48f7-97ee-976613e34b94", 00:41:54.990 "is_configured": true, 00:41:54.990 "data_offset": 0, 00:41:54.990 "data_size": 65536 00:41:54.990 }, 00:41:54.990 { 00:41:54.990 "name": "BaseBdev2", 00:41:54.990 "uuid": "01d8d2a5-417f-4317-8083-07b755f01b27", 00:41:54.990 "is_configured": true, 00:41:54.990 "data_offset": 0, 00:41:54.990 "data_size": 65536 00:41:54.990 }, 00:41:54.990 { 00:41:54.990 "name": "BaseBdev3", 00:41:54.990 "uuid": "0513230f-a890-4444-bef9-3abe88a00216", 00:41:54.990 "is_configured": true, 00:41:54.990 "data_offset": 0, 00:41:54.990 "data_size": 65536 00:41:54.990 }, 00:41:54.990 { 00:41:54.990 "name": "BaseBdev4", 00:41:54.990 "uuid": "b4de2546-73ab-469d-896c-6a9eeab3c76f", 00:41:54.990 "is_configured": true, 00:41:54.990 "data_offset": 0, 00:41:54.990 "data_size": 65536 00:41:54.990 } 00:41:54.990 ] 00:41:54.990 }' 00:41:54.990 19:35:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:54.990 19:35:10 -- common/autotest_common.sh@10 -- # set +x 00:41:55.557 19:35:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:41:55.816 [2024-04-18 19:35:11.655649] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:56.074 19:35:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:56.074 "name": "Existed_Raid", 00:41:56.074 "uuid": "d15daf2a-9179-4fae-90d5-58612af570f0", 00:41:56.074 "strip_size_kb": 64, 00:41:56.074 "state": "online", 00:41:56.074 "raid_level": "raid5f", 00:41:56.074 "superblock": false, 00:41:56.074 
"num_base_bdevs": 4, 00:41:56.074 "num_base_bdevs_discovered": 3, 00:41:56.074 "num_base_bdevs_operational": 3, 00:41:56.074 "base_bdevs_list": [ 00:41:56.074 { 00:41:56.074 "name": null, 00:41:56.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:56.074 "is_configured": false, 00:41:56.074 "data_offset": 0, 00:41:56.074 "data_size": 65536 00:41:56.074 }, 00:41:56.074 { 00:41:56.074 "name": "BaseBdev2", 00:41:56.074 "uuid": "01d8d2a5-417f-4317-8083-07b755f01b27", 00:41:56.074 "is_configured": true, 00:41:56.075 "data_offset": 0, 00:41:56.075 "data_size": 65536 00:41:56.075 }, 00:41:56.075 { 00:41:56.075 "name": "BaseBdev3", 00:41:56.075 "uuid": "0513230f-a890-4444-bef9-3abe88a00216", 00:41:56.075 "is_configured": true, 00:41:56.075 "data_offset": 0, 00:41:56.075 "data_size": 65536 00:41:56.075 }, 00:41:56.075 { 00:41:56.075 "name": "BaseBdev4", 00:41:56.075 "uuid": "b4de2546-73ab-469d-896c-6a9eeab3c76f", 00:41:56.075 "is_configured": true, 00:41:56.075 "data_offset": 0, 00:41:56.075 "data_size": 65536 00:41:56.075 } 00:41:56.075 ] 00:41:56.075 }' 00:41:56.075 19:35:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:56.075 19:35:11 -- common/autotest_common.sh@10 -- # set +x 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:57.009 19:35:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:41:57.267 [2024-04-18 19:35:13.086689] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:57.267 [2024-04-18 19:35:13.086791] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:57.267 [2024-04-18 19:35:13.191101] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:57.523 19:35:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:41:57.523 19:35:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:41:57.523 19:35:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:57.523 19:35:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:41:57.781 19:35:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:41:57.781 19:35:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:57.781 19:35:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:41:58.039 [2024-04-18 19:35:13.759411] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:41:58.039 19:35:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:41:58.039 19:35:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:41:58.039 19:35:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:58.039 19:35:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:41:58.297 19:35:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:41:58.297 19:35:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:41:58.297 19:35:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:41:58.554 [2024-04-18 19:35:14.335922] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:41:58.554 [2024-04-18 19:35:14.335992] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:41:58.554 19:35:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:41:58.554 19:35:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:41:58.554 19:35:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:58.554 19:35:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:41:58.811 19:35:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:41:58.811 19:35:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:41:58.811 19:35:14 -- bdev/bdev_raid.sh@287 -- # killprocess 140513 00:41:58.811 19:35:14 -- common/autotest_common.sh@936 -- # '[' -z 140513 ']' 00:41:58.811 19:35:14 -- common/autotest_common.sh@940 -- # kill -0 140513 00:41:58.811 19:35:14 -- common/autotest_common.sh@941 -- # uname 00:41:59.069 19:35:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:59.069 19:35:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140513 00:41:59.069 killing process with pid 140513 00:41:59.069 19:35:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:59.069 19:35:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:59.069 19:35:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140513' 00:41:59.069 19:35:14 -- common/autotest_common.sh@955 -- # kill 140513 00:41:59.069 19:35:14 -- common/autotest_common.sh@960 -- # wait 140513 00:41:59.069 [2024-04-18 19:35:14.755061] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:59.069 [2024-04-18 19:35:14.755196] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:00.446 ************************************ 00:42:00.446 END TEST raid5f_state_function_test 00:42:00.446 ************************************ 00:42:00.446 19:35:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:42:00.446 00:42:00.446 real 0m15.570s 00:42:00.446 user 0m27.365s 00:42:00.446 sys 0m1.904s 00:42:00.446 19:35:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:00.447 19:35:16 -- common/autotest_common.sh@10 -- # set +x 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:42:00.447 19:35:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:42:00.447 19:35:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:00.447 19:35:16 -- common/autotest_common.sh@10 -- # set +x 00:42:00.447 ************************************ 00:42:00.447 START TEST raid5f_state_function_test_sb 00:42:00.447 ************************************ 00:42:00.447 19:35:16 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 true 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:42:00.447 19:35:16 -- 
bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=141001 00:42:00.447 Process raid pid: 141001 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141001' 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141001 /var/tmp/spdk-raid.sock 00:42:00.447 19:35:16 -- common/autotest_common.sh@817 -- # '[' -z 141001 ']' 00:42:00.447 19:35:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:00.447 19:35:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:42:00.447 19:35:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:00.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:00.447 19:35:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:00.447 19:35:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:00.447 19:35:16 -- common/autotest_common.sh@10 -- # set +x 00:42:00.447 [2024-04-18 19:35:16.369560] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:42:00.447 [2024-04-18 19:35:16.369704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:00.705 [2024-04-18 19:35:16.526445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.964 [2024-04-18 19:35:16.792489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:01.230 [2024-04-18 19:35:17.036103] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:01.489 19:35:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:01.489 19:35:17 -- common/autotest_common.sh@850 -- # return 0 00:42:01.489 19:35:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:42:01.489 [2024-04-18 19:35:17.406061] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:01.489 [2024-04-18 19:35:17.406556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:01.489 [2024-04-18 19:35:17.406588] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:01.489 [2024-04-18 19:35:17.406701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:01.489 [2024-04-18 19:35:17.406723] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:42:01.489 [2024-04-18 19:35:17.406839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:42:01.489 [2024-04-18 19:35:17.406860] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:42:01.489 [2024-04-18 19:35:17.406957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:01.748 19:35:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:02.007 19:35:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:02.007 "name": "Existed_Raid", 00:42:02.007 "uuid": "11ef7ea4-941c-4951-96d2-a955fdf5ea0e", 00:42:02.007 "strip_size_kb": 64, 00:42:02.007 "state": "configuring", 00:42:02.007 "raid_level": "raid5f", 00:42:02.007 "superblock": true, 00:42:02.007 "num_base_bdevs": 4, 00:42:02.007 "num_base_bdevs_discovered": 0, 00:42:02.007 "num_base_bdevs_operational": 4, 00:42:02.007 "base_bdevs_list": [ 00:42:02.007 { 
00:42:02.007 "name": "BaseBdev1", 00:42:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.007 "is_configured": false, 00:42:02.007 "data_offset": 0, 00:42:02.007 "data_size": 0 00:42:02.007 }, 00:42:02.007 { 00:42:02.007 "name": "BaseBdev2", 00:42:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.007 "is_configured": false, 00:42:02.007 "data_offset": 0, 00:42:02.007 "data_size": 0 00:42:02.007 }, 00:42:02.007 { 00:42:02.007 "name": "BaseBdev3", 00:42:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.007 "is_configured": false, 00:42:02.007 "data_offset": 0, 00:42:02.007 "data_size": 0 00:42:02.007 }, 00:42:02.007 { 00:42:02.007 "name": "BaseBdev4", 00:42:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.007 "is_configured": false, 00:42:02.007 "data_offset": 0, 00:42:02.007 "data_size": 0 00:42:02.007 } 00:42:02.007 ] 00:42:02.007 }' 00:42:02.007 19:35:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:02.007 19:35:17 -- common/autotest_common.sh@10 -- # set +x 00:42:02.573 19:35:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:42:02.831 [2024-04-18 19:35:18.518151] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:02.831 [2024-04-18 19:35:18.518197] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:42:02.831 19:35:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:42:02.831 [2024-04-18 19:35:18.734305] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:02.831 [2024-04-18 19:35:18.734824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:02.831 [2024-04-18 19:35:18.734855] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:02.831 [2024-04-18 19:35:18.734980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:02.831 [2024-04-18 19:35:18.735003] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:42:02.831 [2024-04-18 19:35:18.735130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:42:02.831 [2024-04-18 19:35:18.735150] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:42:02.831 [2024-04-18 19:35:18.735252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:42:02.831 19:35:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:42:03.089 [2024-04-18 19:35:18.999672] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:03.089 BaseBdev1 00:42:03.347 19:35:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:42:03.347 19:35:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:42:03.347 19:35:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:42:03.347 19:35:19 -- common/autotest_common.sh@887 -- # local i 00:42:03.347 19:35:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:42:03.347 19:35:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:42:03.347 19:35:19 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:42:03.347 19:35:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:42:03.914 [ 00:42:03.914 { 00:42:03.914 "name": "BaseBdev1", 00:42:03.914 "aliases": [ 00:42:03.914 "1060bf25-a642-4b7e-ae81-2cd8059c5c61" 00:42:03.914 ], 00:42:03.914 "product_name": "Malloc disk", 00:42:03.914 "block_size": 512, 00:42:03.914 "num_blocks": 65536, 00:42:03.914 "uuid": "1060bf25-a642-4b7e-ae81-2cd8059c5c61", 00:42:03.914 "assigned_rate_limits": { 00:42:03.914 "rw_ios_per_sec": 0, 00:42:03.914 "rw_mbytes_per_sec": 0, 00:42:03.914 "r_mbytes_per_sec": 0, 00:42:03.914 "w_mbytes_per_sec": 0 00:42:03.914 }, 00:42:03.914 "claimed": true, 00:42:03.914 "claim_type": "exclusive_write", 00:42:03.914 "zoned": false, 00:42:03.914 "supported_io_types": { 00:42:03.914 "read": true, 00:42:03.914 "write": true, 00:42:03.914 "unmap": true, 00:42:03.914 "write_zeroes": true, 00:42:03.914 "flush": true, 00:42:03.914 "reset": true, 00:42:03.914 "compare": false, 00:42:03.914 "compare_and_write": false, 00:42:03.914 "abort": true, 00:42:03.914 "nvme_admin": false, 00:42:03.914 "nvme_io": false 00:42:03.914 }, 00:42:03.914 "memory_domains": [ 00:42:03.914 { 00:42:03.914 "dma_device_id": "system", 00:42:03.914 "dma_device_type": 1 00:42:03.914 }, 00:42:03.914 { 00:42:03.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:03.914 "dma_device_type": 2 00:42:03.914 } 00:42:03.914 ], 00:42:03.914 "driver_specific": {} 00:42:03.914 } 00:42:03.914 ] 00:42:03.914 19:35:19 -- common/autotest_common.sh@893 -- # return 0 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:03.914 "name": "Existed_Raid", 00:42:03.914 "uuid": "ef903dbd-6f22-4cec-936f-73cbbe0168ed", 00:42:03.914 "strip_size_kb": 64, 00:42:03.914 "state": "configuring", 00:42:03.914 "raid_level": "raid5f", 00:42:03.914 "superblock": true, 00:42:03.914 "num_base_bdevs": 4, 00:42:03.914 "num_base_bdevs_discovered": 1, 00:42:03.914 "num_base_bdevs_operational": 4, 00:42:03.914 "base_bdevs_list": [ 00:42:03.914 { 00:42:03.914 "name": "BaseBdev1", 00:42:03.914 "uuid": "1060bf25-a642-4b7e-ae81-2cd8059c5c61", 00:42:03.914 "is_configured": true, 00:42:03.914 "data_offset": 2048, 00:42:03.914 "data_size": 63488 00:42:03.914 }, 00:42:03.914 { 00:42:03.914 "name": "BaseBdev2", 00:42:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 
00:42:03.914 "is_configured": false, 00:42:03.914 "data_offset": 0, 00:42:03.914 "data_size": 0 00:42:03.914 }, 00:42:03.914 { 00:42:03.914 "name": "BaseBdev3", 00:42:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:03.914 "is_configured": false, 00:42:03.914 "data_offset": 0, 00:42:03.914 "data_size": 0 00:42:03.914 }, 00:42:03.914 { 00:42:03.914 "name": "BaseBdev4", 00:42:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:03.914 "is_configured": false, 00:42:03.914 "data_offset": 0, 00:42:03.914 "data_size": 0 00:42:03.914 } 00:42:03.914 ] 00:42:03.914 }' 00:42:03.914 19:35:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:03.914 19:35:19 -- common/autotest_common.sh@10 -- # set +x 00:42:04.852 19:35:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:42:04.852 [2024-04-18 19:35:20.701463] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:04.852 [2024-04-18 19:35:20.701533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:42:04.852 19:35:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:42:04.852 19:35:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:42:05.111 19:35:21 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:42:05.679 BaseBdev1 00:42:05.679 19:35:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:42:05.679 19:35:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:42:05.679 19:35:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:42:05.679 19:35:21 -- common/autotest_common.sh@887 -- # local i 00:42:05.679 19:35:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:42:05.679 19:35:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:42:05.679 19:35:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:42:05.679 19:35:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:42:05.938 [ 00:42:05.938 { 00:42:05.938 "name": "BaseBdev1", 00:42:05.938 "aliases": [ 00:42:05.938 "0b863551-54e4-4b96-8e5f-041d475392cc" 00:42:05.938 ], 00:42:05.938 "product_name": "Malloc disk", 00:42:05.938 "block_size": 512, 00:42:05.938 "num_blocks": 65536, 00:42:05.938 "uuid": "0b863551-54e4-4b96-8e5f-041d475392cc", 00:42:05.938 "assigned_rate_limits": { 00:42:05.938 "rw_ios_per_sec": 0, 00:42:05.938 "rw_mbytes_per_sec": 0, 00:42:05.938 "r_mbytes_per_sec": 0, 00:42:05.938 "w_mbytes_per_sec": 0 00:42:05.938 }, 00:42:05.938 "claimed": false, 00:42:05.938 "zoned": false, 00:42:05.938 "supported_io_types": { 00:42:05.938 "read": true, 00:42:05.938 "write": true, 00:42:05.938 "unmap": true, 00:42:05.938 "write_zeroes": true, 00:42:05.938 "flush": true, 00:42:05.938 "reset": true, 00:42:05.938 "compare": false, 00:42:05.938 "compare_and_write": false, 00:42:05.938 "abort": true, 00:42:05.938 "nvme_admin": false, 00:42:05.938 "nvme_io": false 00:42:05.938 }, 00:42:05.938 "memory_domains": [ 00:42:05.938 { 00:42:05.938 "dma_device_id": "system", 00:42:05.938 "dma_device_type": 1 00:42:05.938 }, 00:42:05.938 { 00:42:05.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:05.938 "dma_device_type": 2 
00:42:05.938 } 00:42:05.938 ], 00:42:05.938 "driver_specific": {} 00:42:05.938 } 00:42:05.938 ] 00:42:05.938 19:35:21 -- common/autotest_common.sh@893 -- # return 0 00:42:05.938 19:35:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:42:06.197 [2024-04-18 19:35:21.922750] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:06.197 [2024-04-18 19:35:21.924897] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:06.197 [2024-04-18 19:35:21.925520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:06.197 [2024-04-18 19:35:21.925552] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:42:06.197 [2024-04-18 19:35:21.925680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:42:06.197 [2024-04-18 19:35:21.925694] bdev.c:8066:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:42:06.197 [2024-04-18 19:35:21.925787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:06.197 19:35:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:06.455 19:35:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:06.455 "name": "Existed_Raid", 00:42:06.455 "uuid": "8f544fa0-dea6-409e-9af7-8245d0379440", 00:42:06.455 "strip_size_kb": 64, 00:42:06.455 "state": "configuring", 00:42:06.455 "raid_level": "raid5f", 00:42:06.455 "superblock": true, 00:42:06.455 "num_base_bdevs": 4, 00:42:06.455 "num_base_bdevs_discovered": 1, 00:42:06.455 "num_base_bdevs_operational": 4, 00:42:06.455 "base_bdevs_list": [ 00:42:06.455 { 00:42:06.455 "name": "BaseBdev1", 00:42:06.455 "uuid": "0b863551-54e4-4b96-8e5f-041d475392cc", 00:42:06.455 "is_configured": true, 00:42:06.455 "data_offset": 2048, 00:42:06.455 "data_size": 63488 00:42:06.455 }, 00:42:06.455 { 00:42:06.455 "name": "BaseBdev2", 00:42:06.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.455 "is_configured": false, 00:42:06.455 "data_offset": 0, 00:42:06.455 "data_size": 0 00:42:06.455 }, 00:42:06.455 { 00:42:06.455 "name": "BaseBdev3", 00:42:06.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.455 "is_configured": 
false, 00:42:06.455 "data_offset": 0, 00:42:06.455 "data_size": 0 00:42:06.455 }, 00:42:06.455 { 00:42:06.455 "name": "BaseBdev4", 00:42:06.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.455 "is_configured": false, 00:42:06.455 "data_offset": 0, 00:42:06.455 "data_size": 0 00:42:06.455 } 00:42:06.455 ] 00:42:06.455 }' 00:42:06.455 19:35:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:06.455 19:35:22 -- common/autotest_common.sh@10 -- # set +x 00:42:07.043 19:35:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:42:07.301 [2024-04-18 19:35:23.105556] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:07.301 BaseBdev2 00:42:07.301 19:35:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:42:07.301 19:35:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:42:07.301 19:35:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:42:07.301 19:35:23 -- common/autotest_common.sh@887 -- # local i 00:42:07.301 19:35:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:42:07.301 19:35:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:42:07.301 19:35:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:42:07.559 19:35:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:42:07.816 [ 00:42:07.816 { 00:42:07.816 "name": "BaseBdev2", 00:42:07.816 "aliases": [ 00:42:07.816 "bdac8bcc-d373-4b0d-bd68-747cf7d396bd" 00:42:07.816 ], 00:42:07.816 "product_name": "Malloc disk", 00:42:07.816 "block_size": 512, 00:42:07.816 "num_blocks": 65536, 00:42:07.816 "uuid": "bdac8bcc-d373-4b0d-bd68-747cf7d396bd", 00:42:07.816 "assigned_rate_limits": { 00:42:07.816 "rw_ios_per_sec": 0, 00:42:07.816 "rw_mbytes_per_sec": 0, 00:42:07.816 "r_mbytes_per_sec": 0, 00:42:07.816 "w_mbytes_per_sec": 0 00:42:07.816 }, 00:42:07.816 "claimed": true, 00:42:07.816 "claim_type": "exclusive_write", 00:42:07.816 "zoned": false, 00:42:07.816 "supported_io_types": { 00:42:07.816 "read": true, 00:42:07.816 "write": true, 00:42:07.816 "unmap": true, 00:42:07.816 "write_zeroes": true, 00:42:07.816 "flush": true, 00:42:07.816 "reset": true, 00:42:07.816 "compare": false, 00:42:07.816 "compare_and_write": false, 00:42:07.816 "abort": true, 00:42:07.816 "nvme_admin": false, 00:42:07.816 "nvme_io": false 00:42:07.816 }, 00:42:07.816 "memory_domains": [ 00:42:07.816 { 00:42:07.816 "dma_device_id": "system", 00:42:07.816 "dma_device_type": 1 00:42:07.816 }, 00:42:07.816 { 00:42:07.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:07.816 "dma_device_type": 2 00:42:07.816 } 00:42:07.816 ], 00:42:07.816 "driver_specific": {} 00:42:07.816 } 00:42:07.816 ] 00:42:07.816 19:35:23 -- common/autotest_common.sh@893 -- # return 0 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:07.816 19:35:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:08.074 19:35:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:08.074 "name": "Existed_Raid", 00:42:08.074 "uuid": "8f544fa0-dea6-409e-9af7-8245d0379440", 00:42:08.074 "strip_size_kb": 64, 00:42:08.074 "state": "configuring", 00:42:08.074 "raid_level": "raid5f", 00:42:08.074 "superblock": true, 00:42:08.074 "num_base_bdevs": 4, 00:42:08.074 "num_base_bdevs_discovered": 2, 00:42:08.074 "num_base_bdevs_operational": 4, 00:42:08.074 "base_bdevs_list": [ 00:42:08.074 { 00:42:08.074 "name": "BaseBdev1", 00:42:08.074 "uuid": "0b863551-54e4-4b96-8e5f-041d475392cc", 00:42:08.074 "is_configured": true, 00:42:08.074 "data_offset": 2048, 00:42:08.074 "data_size": 63488 00:42:08.074 }, 00:42:08.074 { 00:42:08.074 "name": "BaseBdev2", 00:42:08.074 "uuid": "bdac8bcc-d373-4b0d-bd68-747cf7d396bd", 00:42:08.074 "is_configured": true, 00:42:08.074 "data_offset": 2048, 00:42:08.074 "data_size": 63488 00:42:08.074 }, 00:42:08.074 { 00:42:08.074 "name": "BaseBdev3", 00:42:08.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:08.074 "is_configured": false, 00:42:08.074 "data_offset": 0, 00:42:08.074 "data_size": 0 00:42:08.074 }, 00:42:08.074 { 00:42:08.074 "name": "BaseBdev4", 00:42:08.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:08.074 "is_configured": false, 00:42:08.074 "data_offset": 0, 00:42:08.074 "data_size": 0 00:42:08.074 } 00:42:08.074 ] 00:42:08.074 }' 00:42:08.074 19:35:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:08.074 19:35:23 -- common/autotest_common.sh@10 -- # set +x 00:42:08.639 19:35:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:42:08.897 [2024-04-18 19:35:24.676065] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:08.897 BaseBdev3 00:42:08.897 19:35:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:42:08.897 19:35:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:42:08.897 19:35:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:42:08.897 19:35:24 -- common/autotest_common.sh@887 -- # local i 00:42:08.897 19:35:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:42:08.897 19:35:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:42:08.897 19:35:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:42:09.155 19:35:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:42:09.413 [ 00:42:09.413 { 00:42:09.413 "name": "BaseBdev3", 00:42:09.413 "aliases": [ 00:42:09.413 "3b831448-702d-4684-aec4-33d626fb8dec" 00:42:09.413 ], 00:42:09.413 "product_name": "Malloc disk", 00:42:09.413 "block_size": 512, 00:42:09.413 "num_blocks": 65536, 00:42:09.413 "uuid": 
"3b831448-702d-4684-aec4-33d626fb8dec", 00:42:09.413 "assigned_rate_limits": { 00:42:09.413 "rw_ios_per_sec": 0, 00:42:09.413 "rw_mbytes_per_sec": 0, 00:42:09.413 "r_mbytes_per_sec": 0, 00:42:09.413 "w_mbytes_per_sec": 0 00:42:09.413 }, 00:42:09.413 "claimed": true, 00:42:09.413 "claim_type": "exclusive_write", 00:42:09.413 "zoned": false, 00:42:09.413 "supported_io_types": { 00:42:09.413 "read": true, 00:42:09.413 "write": true, 00:42:09.413 "unmap": true, 00:42:09.413 "write_zeroes": true, 00:42:09.413 "flush": true, 00:42:09.413 "reset": true, 00:42:09.413 "compare": false, 00:42:09.413 "compare_and_write": false, 00:42:09.413 "abort": true, 00:42:09.413 "nvme_admin": false, 00:42:09.413 "nvme_io": false 00:42:09.413 }, 00:42:09.413 "memory_domains": [ 00:42:09.413 { 00:42:09.413 "dma_device_id": "system", 00:42:09.413 "dma_device_type": 1 00:42:09.413 }, 00:42:09.413 { 00:42:09.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:09.413 "dma_device_type": 2 00:42:09.413 } 00:42:09.413 ], 00:42:09.413 "driver_specific": {} 00:42:09.413 } 00:42:09.413 ] 00:42:09.413 19:35:25 -- common/autotest_common.sh@893 -- # return 0 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:09.413 "name": "Existed_Raid", 00:42:09.413 "uuid": "8f544fa0-dea6-409e-9af7-8245d0379440", 00:42:09.413 "strip_size_kb": 64, 00:42:09.413 "state": "configuring", 00:42:09.413 "raid_level": "raid5f", 00:42:09.413 "superblock": true, 00:42:09.413 "num_base_bdevs": 4, 00:42:09.413 "num_base_bdevs_discovered": 3, 00:42:09.413 "num_base_bdevs_operational": 4, 00:42:09.413 "base_bdevs_list": [ 00:42:09.413 { 00:42:09.413 "name": "BaseBdev1", 00:42:09.413 "uuid": "0b863551-54e4-4b96-8e5f-041d475392cc", 00:42:09.413 "is_configured": true, 00:42:09.413 "data_offset": 2048, 00:42:09.413 "data_size": 63488 00:42:09.413 }, 00:42:09.413 { 00:42:09.413 "name": "BaseBdev2", 00:42:09.413 "uuid": "bdac8bcc-d373-4b0d-bd68-747cf7d396bd", 00:42:09.413 "is_configured": true, 00:42:09.413 "data_offset": 2048, 00:42:09.413 "data_size": 63488 00:42:09.413 }, 00:42:09.413 { 00:42:09.413 "name": "BaseBdev3", 00:42:09.413 "uuid": "3b831448-702d-4684-aec4-33d626fb8dec", 00:42:09.413 "is_configured": true, 00:42:09.413 "data_offset": 2048, 00:42:09.413 "data_size": 63488 00:42:09.413 }, 00:42:09.413 { 00:42:09.413 "name": "BaseBdev4", 00:42:09.413 
"uuid": "00000000-0000-0000-0000-000000000000", 00:42:09.413 "is_configured": false, 00:42:09.413 "data_offset": 0, 00:42:09.413 "data_size": 0 00:42:09.413 } 00:42:09.413 ] 00:42:09.413 }' 00:42:09.413 19:35:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:09.413 19:35:25 -- common/autotest_common.sh@10 -- # set +x 00:42:10.346 19:35:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:42:10.604 [2024-04-18 19:35:26.401491] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:10.604 [2024-04-18 19:35:26.401973] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:42:10.604 [2024-04-18 19:35:26.402099] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:10.604 [2024-04-18 19:35:26.402284] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:42:10.604 BaseBdev4 00:42:10.604 [2024-04-18 19:35:26.409914] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:42:10.604 [2024-04-18 19:35:26.410075] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:42:10.604 [2024-04-18 19:35:26.410350] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:10.604 19:35:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:42:10.604 19:35:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:42:10.604 19:35:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:42:10.604 19:35:26 -- common/autotest_common.sh@887 -- # local i 00:42:10.604 19:35:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:42:10.604 19:35:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:42:10.604 19:35:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:42:10.863 19:35:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:42:11.122 [ 00:42:11.122 { 00:42:11.122 "name": "BaseBdev4", 00:42:11.122 "aliases": [ 00:42:11.122 "f31e8eca-906e-45ec-be67-f62046cbc56d" 00:42:11.122 ], 00:42:11.122 "product_name": "Malloc disk", 00:42:11.122 "block_size": 512, 00:42:11.122 "num_blocks": 65536, 00:42:11.122 "uuid": "f31e8eca-906e-45ec-be67-f62046cbc56d", 00:42:11.122 "assigned_rate_limits": { 00:42:11.122 "rw_ios_per_sec": 0, 00:42:11.122 "rw_mbytes_per_sec": 0, 00:42:11.122 "r_mbytes_per_sec": 0, 00:42:11.122 "w_mbytes_per_sec": 0 00:42:11.122 }, 00:42:11.122 "claimed": true, 00:42:11.122 "claim_type": "exclusive_write", 00:42:11.122 "zoned": false, 00:42:11.122 "supported_io_types": { 00:42:11.122 "read": true, 00:42:11.122 "write": true, 00:42:11.122 "unmap": true, 00:42:11.122 "write_zeroes": true, 00:42:11.122 "flush": true, 00:42:11.122 "reset": true, 00:42:11.122 "compare": false, 00:42:11.122 "compare_and_write": false, 00:42:11.122 "abort": true, 00:42:11.122 "nvme_admin": false, 00:42:11.122 "nvme_io": false 00:42:11.122 }, 00:42:11.122 "memory_domains": [ 00:42:11.122 { 00:42:11.122 "dma_device_id": "system", 00:42:11.122 "dma_device_type": 1 00:42:11.122 }, 00:42:11.122 { 00:42:11.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:11.122 "dma_device_type": 2 00:42:11.122 } 00:42:11.122 ], 00:42:11.122 "driver_specific": {} 00:42:11.122 } 00:42:11.122 ] 
00:42:11.122 19:35:26 -- common/autotest_common.sh@893 -- # return 0 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:11.122 19:35:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:11.381 19:35:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:11.381 "name": "Existed_Raid", 00:42:11.381 "uuid": "8f544fa0-dea6-409e-9af7-8245d0379440", 00:42:11.381 "strip_size_kb": 64, 00:42:11.381 "state": "online", 00:42:11.381 "raid_level": "raid5f", 00:42:11.381 "superblock": true, 00:42:11.381 "num_base_bdevs": 4, 00:42:11.381 "num_base_bdevs_discovered": 4, 00:42:11.381 "num_base_bdevs_operational": 4, 00:42:11.381 "base_bdevs_list": [ 00:42:11.381 { 00:42:11.381 "name": "BaseBdev1", 00:42:11.381 "uuid": "0b863551-54e4-4b96-8e5f-041d475392cc", 00:42:11.381 "is_configured": true, 00:42:11.381 "data_offset": 2048, 00:42:11.381 "data_size": 63488 00:42:11.381 }, 00:42:11.381 { 00:42:11.381 "name": "BaseBdev2", 00:42:11.381 "uuid": "bdac8bcc-d373-4b0d-bd68-747cf7d396bd", 00:42:11.381 "is_configured": true, 00:42:11.381 "data_offset": 2048, 00:42:11.381 "data_size": 63488 00:42:11.381 }, 00:42:11.381 { 00:42:11.381 "name": "BaseBdev3", 00:42:11.381 "uuid": "3b831448-702d-4684-aec4-33d626fb8dec", 00:42:11.381 "is_configured": true, 00:42:11.381 "data_offset": 2048, 00:42:11.381 "data_size": 63488 00:42:11.381 }, 00:42:11.381 { 00:42:11.381 "name": "BaseBdev4", 00:42:11.381 "uuid": "f31e8eca-906e-45ec-be67-f62046cbc56d", 00:42:11.381 "is_configured": true, 00:42:11.381 "data_offset": 2048, 00:42:11.381 "data_size": 63488 00:42:11.381 } 00:42:11.381 ] 00:42:11.381 }' 00:42:11.381 19:35:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:11.381 19:35:27 -- common/autotest_common.sh@10 -- # set +x 00:42:12.321 19:35:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:42:12.321 [2024-04-18 19:35:28.244386] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=Existed_Raid 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:12.579 19:35:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:12.838 19:35:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:12.838 "name": "Existed_Raid", 00:42:12.838 "uuid": "8f544fa0-dea6-409e-9af7-8245d0379440", 00:42:12.838 "strip_size_kb": 64, 00:42:12.838 "state": "online", 00:42:12.838 "raid_level": "raid5f", 00:42:12.838 "superblock": true, 00:42:12.838 "num_base_bdevs": 4, 00:42:12.838 "num_base_bdevs_discovered": 3, 00:42:12.838 "num_base_bdevs_operational": 3, 00:42:12.838 "base_bdevs_list": [ 00:42:12.838 { 00:42:12.838 "name": null, 00:42:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:12.838 "is_configured": false, 00:42:12.838 "data_offset": 2048, 00:42:12.838 "data_size": 63488 00:42:12.838 }, 00:42:12.838 { 00:42:12.838 "name": "BaseBdev2", 00:42:12.838 "uuid": "bdac8bcc-d373-4b0d-bd68-747cf7d396bd", 00:42:12.838 "is_configured": true, 00:42:12.838 "data_offset": 2048, 00:42:12.838 "data_size": 63488 00:42:12.838 }, 00:42:12.838 { 00:42:12.838 "name": "BaseBdev3", 00:42:12.838 "uuid": "3b831448-702d-4684-aec4-33d626fb8dec", 00:42:12.838 "is_configured": true, 00:42:12.838 "data_offset": 2048, 00:42:12.838 "data_size": 63488 00:42:12.838 }, 00:42:12.838 { 00:42:12.838 "name": "BaseBdev4", 00:42:12.838 "uuid": "f31e8eca-906e-45ec-be67-f62046cbc56d", 00:42:12.838 "is_configured": true, 00:42:12.838 "data_offset": 2048, 00:42:12.838 "data_size": 63488 00:42:12.838 } 00:42:12.838 ] 00:42:12.838 }' 00:42:12.838 19:35:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:12.838 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:42:13.774 19:35:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:42:13.774 19:35:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:42:13.774 19:35:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:13.774 19:35:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:42:14.032 19:35:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:42:14.032 19:35:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:42:14.032 19:35:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:42:14.032 [2024-04-18 19:35:29.941250] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:14.032 [2024-04-18 19:35:29.941949] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:14.291 [2024-04-18 19:35:30.048579] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:14.291 19:35:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:42:14.291 19:35:30 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:42:14.291 19:35:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:14.291 19:35:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:42:14.549 19:35:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:42:14.549 19:35:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:42:14.549 19:35:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:42:14.816 [2024-04-18 19:35:30.656892] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:42:15.089 19:35:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:42:15.089 19:35:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:42:15.089 19:35:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:42:15.089 19:35:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:15.346 19:35:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:42:15.346 19:35:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:42:15.346 19:35:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:42:15.605 [2024-04-18 19:35:31.335777] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:42:15.605 [2024-04-18 19:35:31.336039] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:42:15.605 19:35:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:42:15.605 19:35:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:42:15.605 19:35:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:15.605 19:35:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:42:15.864 19:35:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:42:15.864 19:35:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:42:15.864 19:35:31 -- bdev/bdev_raid.sh@287 -- # killprocess 141001 00:42:15.864 19:35:31 -- common/autotest_common.sh@936 -- # '[' -z 141001 ']' 00:42:15.864 19:35:31 -- common/autotest_common.sh@940 -- # kill -0 141001 00:42:15.864 19:35:31 -- common/autotest_common.sh@941 -- # uname 00:42:15.864 19:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:15.864 19:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141001 00:42:15.864 killing process with pid 141001 00:42:15.864 19:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:15.864 19:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:15.864 19:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141001' 00:42:15.864 19:35:31 -- common/autotest_common.sh@955 -- # kill 141001 00:42:15.864 19:35:31 -- common/autotest_common.sh@960 -- # wait 141001 00:42:15.864 [2024-04-18 19:35:31.693255] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:15.864 [2024-04-18 19:35:31.693379] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:17.240 ************************************ 00:42:17.240 END TEST raid5f_state_function_test_sb 00:42:17.240 ************************************ 00:42:17.240 19:35:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:42:17.240 00:42:17.240 real 0m16.804s 00:42:17.240 user 0m29.476s 
00:42:17.240 sys 0m2.084s 00:42:17.240 19:35:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:17.240 19:35:33 -- common/autotest_common.sh@10 -- # set +x 00:42:17.240 19:35:33 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:42:17.240 19:35:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:42:17.240 19:35:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:17.240 19:35:33 -- common/autotest_common.sh@10 -- # set +x 00:42:17.498 ************************************ 00:42:17.498 START TEST raid5f_superblock_test 00:42:17.498 ************************************ 00:42:17.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:17.498 19:35:33 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 4 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=141489 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 141489 /var/tmp/spdk-raid.sock 00:42:17.498 19:35:33 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:42:17.498 19:35:33 -- common/autotest_common.sh@817 -- # '[' -z 141489 ']' 00:42:17.498 19:35:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:17.498 19:35:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:17.498 19:35:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:17.498 19:35:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:17.498 19:35:33 -- common/autotest_common.sh@10 -- # set +x 00:42:17.498 [2024-04-18 19:35:33.249601] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:42:17.498 [2024-04-18 19:35:33.249960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141489 ] 00:42:17.756 [2024-04-18 19:35:33.424970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.756 [2024-04-18 19:35:33.676315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.013 [2024-04-18 19:35:33.892435] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:18.271 19:35:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:18.271 19:35:34 -- common/autotest_common.sh@850 -- # return 0 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:18.271 19:35:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:42:18.837 malloc1 00:42:18.837 19:35:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:19.095 [2024-04-18 19:35:34.800272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:19.095 [2024-04-18 19:35:34.801047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:19.095 [2024-04-18 19:35:34.801352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:42:19.095 [2024-04-18 19:35:34.801679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:19.095 [2024-04-18 19:35:34.804607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:19.095 [2024-04-18 19:35:34.804885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:19.095 pt1 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:19.095 19:35:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:42:19.353 malloc2 00:42:19.353 19:35:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:42:19.611 [2024-04-18 19:35:35.438985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:19.611 [2024-04-18 19:35:35.439530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:19.611 [2024-04-18 19:35:35.439855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:42:19.611 [2024-04-18 19:35:35.440190] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:19.611 [2024-04-18 19:35:35.442940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:19.611 [2024-04-18 19:35:35.443212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:19.611 pt2 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:19.611 19:35:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:42:19.868 malloc3 00:42:19.868 19:35:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:42:20.126 [2024-04-18 19:35:36.014732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:42:20.126 [2024-04-18 19:35:36.015528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:20.126 [2024-04-18 19:35:36.015819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:42:20.126 [2024-04-18 19:35:36.016089] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:20.126 [2024-04-18 19:35:36.018934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:20.126 [2024-04-18 19:35:36.019249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:42:20.126 pt3 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:20.126 19:35:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:42:20.693 malloc4 00:42:20.693 19:35:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:42:20.693 [2024-04-18 19:35:36.609944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:42:20.693 [2024-04-18 19:35:36.610343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:20.693 [2024-04-18 19:35:36.610424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:20.693 [2024-04-18 19:35:36.610556] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:20.693 [2024-04-18 19:35:36.613117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:20.693 [2024-04-18 19:35:36.613314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:42:20.693 pt4 00:42:20.950 19:35:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:42:20.950 19:35:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:42:20.950 19:35:36 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:42:20.950 [2024-04-18 19:35:36.826157] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:20.951 [2024-04-18 19:35:36.828477] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:20.951 [2024-04-18 19:35:36.828688] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:42:20.951 [2024-04-18 19:35:36.828801] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:42:20.951 [2024-04-18 19:35:36.829135] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:42:20.951 [2024-04-18 19:35:36.829247] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:20.951 [2024-04-18 19:35:36.829417] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:42:20.951 [2024-04-18 19:35:36.836659] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:42:20.951 [2024-04-18 19:35:36.836785] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:42:20.951 [2024-04-18 19:35:36.837133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:20.951 19:35:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:21.517 19:35:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:21.517 "name": "raid_bdev1", 00:42:21.517 "uuid": 
"1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:21.517 "strip_size_kb": 64, 00:42:21.517 "state": "online", 00:42:21.517 "raid_level": "raid5f", 00:42:21.517 "superblock": true, 00:42:21.517 "num_base_bdevs": 4, 00:42:21.517 "num_base_bdevs_discovered": 4, 00:42:21.517 "num_base_bdevs_operational": 4, 00:42:21.517 "base_bdevs_list": [ 00:42:21.517 { 00:42:21.517 "name": "pt1", 00:42:21.517 "uuid": "8a750a4e-3ed3-599f-a327-c265405ae0c3", 00:42:21.517 "is_configured": true, 00:42:21.517 "data_offset": 2048, 00:42:21.517 "data_size": 63488 00:42:21.517 }, 00:42:21.517 { 00:42:21.517 "name": "pt2", 00:42:21.517 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:21.517 "is_configured": true, 00:42:21.517 "data_offset": 2048, 00:42:21.517 "data_size": 63488 00:42:21.517 }, 00:42:21.517 { 00:42:21.517 "name": "pt3", 00:42:21.517 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:21.517 "is_configured": true, 00:42:21.517 "data_offset": 2048, 00:42:21.517 "data_size": 63488 00:42:21.517 }, 00:42:21.517 { 00:42:21.517 "name": "pt4", 00:42:21.517 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:21.517 "is_configured": true, 00:42:21.517 "data_offset": 2048, 00:42:21.517 "data_size": 63488 00:42:21.517 } 00:42:21.517 ] 00:42:21.517 }' 00:42:21.517 19:35:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:21.517 19:35:37 -- common/autotest_common.sh@10 -- # set +x 00:42:22.085 19:35:37 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:22.085 19:35:37 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:42:22.085 [2024-04-18 19:35:37.983497] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:22.085 19:35:37 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1d46c325-ec0c-43cc-8d4f-5bafac40f54c 00:42:22.085 19:35:37 -- bdev/bdev_raid.sh@380 -- # '[' -z 1d46c325-ec0c-43cc-8d4f-5bafac40f54c ']' 00:42:22.085 19:35:37 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:22.350 [2024-04-18 19:35:38.247367] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:22.350 [2024-04-18 19:35:38.247578] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:22.350 [2024-04-18 19:35:38.247749] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:22.350 [2024-04-18 19:35:38.247918] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:22.350 [2024-04-18 19:35:38.248014] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:42:22.350 19:35:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:22.350 19:35:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:42:22.925 19:35:38 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:42:22.925 19:35:38 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:42:22.925 19:35:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:42:22.925 19:35:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:42:22.925 19:35:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:42:22.925 19:35:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:42:23.183 19:35:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:42:23.183 19:35:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:42:23.440 19:35:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:42:23.441 19:35:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:42:23.699 19:35:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:42:23.699 19:35:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:42:24.264 19:35:39 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:42:24.264 19:35:39 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:42:24.264 19:35:39 -- common/autotest_common.sh@638 -- # local es=0 00:42:24.264 19:35:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:42:24.264 19:35:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:24.264 19:35:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:42:24.264 19:35:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:24.264 19:35:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:42:24.264 19:35:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:24.264 19:35:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:42:24.264 19:35:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:24.264 19:35:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:24.264 19:35:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:42:24.264 [2024-04-18 19:35:40.159734] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:42:24.264 [2024-04-18 19:35:40.162197] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:42:24.264 [2024-04-18 19:35:40.162442] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:42:24.264 [2024-04-18 19:35:40.162515] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:42:24.264 [2024-04-18 19:35:40.162682] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:42:24.264 [2024-04-18 19:35:40.162853] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:42:24.264 [2024-04-18 19:35:40.162995] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:42:24.264 [2024-04-18 19:35:40.163082] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:42:24.264 [2024-04-18 19:35:40.163142] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:24.264 [2024-04-18 19:35:40.163335] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:42:24.264 request: 00:42:24.264 { 00:42:24.264 "name": "raid_bdev1", 00:42:24.264 "raid_level": "raid5f", 00:42:24.264 "base_bdevs": [ 00:42:24.264 "malloc1", 00:42:24.264 "malloc2", 00:42:24.264 "malloc3", 00:42:24.264 "malloc4" 00:42:24.264 ], 00:42:24.264 "superblock": false, 00:42:24.264 "strip_size_kb": 64, 00:42:24.264 "method": "bdev_raid_create", 00:42:24.264 "req_id": 1 00:42:24.264 } 00:42:24.264 Got JSON-RPC error response 00:42:24.264 response: 00:42:24.264 { 00:42:24.264 "code": -17, 00:42:24.264 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:42:24.264 } 00:42:24.264 19:35:40 -- common/autotest_common.sh@641 -- # es=1 00:42:24.264 19:35:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:42:24.264 19:35:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:42:24.264 19:35:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:42:24.264 19:35:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:42:24.264 19:35:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:24.831 19:35:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:42:24.831 19:35:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:42:24.831 19:35:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:25.090 [2024-04-18 19:35:40.807918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:25.090 [2024-04-18 19:35:40.808197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:25.090 [2024-04-18 19:35:40.808261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:42:25.090 [2024-04-18 19:35:40.808367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:25.090 [2024-04-18 19:35:40.810874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:25.090 [2024-04-18 19:35:40.811055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:25.090 [2024-04-18 19:35:40.811256] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:42:25.090 [2024-04-18 19:35:40.811414] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:25.090 pt1 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:25.090 19:35:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:25.348 19:35:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:25.348 "name": "raid_bdev1", 00:42:25.348 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:25.348 "strip_size_kb": 64, 00:42:25.348 "state": "configuring", 00:42:25.348 "raid_level": "raid5f", 00:42:25.348 "superblock": true, 00:42:25.348 "num_base_bdevs": 4, 00:42:25.348 "num_base_bdevs_discovered": 1, 00:42:25.348 "num_base_bdevs_operational": 4, 00:42:25.348 "base_bdevs_list": [ 00:42:25.348 { 00:42:25.348 "name": "pt1", 00:42:25.348 "uuid": "8a750a4e-3ed3-599f-a327-c265405ae0c3", 00:42:25.348 "is_configured": true, 00:42:25.348 "data_offset": 2048, 00:42:25.348 "data_size": 63488 00:42:25.348 }, 00:42:25.348 { 00:42:25.348 "name": null, 00:42:25.348 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:25.348 "is_configured": false, 00:42:25.348 "data_offset": 2048, 00:42:25.348 "data_size": 63488 00:42:25.348 }, 00:42:25.348 { 00:42:25.348 "name": null, 00:42:25.348 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:25.348 "is_configured": false, 00:42:25.348 "data_offset": 2048, 00:42:25.348 "data_size": 63488 00:42:25.348 }, 00:42:25.348 { 00:42:25.348 "name": null, 00:42:25.348 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:25.348 "is_configured": false, 00:42:25.348 "data_offset": 2048, 00:42:25.348 "data_size": 63488 00:42:25.348 } 00:42:25.348 ] 00:42:25.348 }' 00:42:25.348 19:35:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:25.348 19:35:41 -- common/autotest_common.sh@10 -- # set +x 00:42:25.916 19:35:41 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:42:25.916 19:35:41 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:26.175 [2024-04-18 19:35:41.916622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:26.175 [2024-04-18 19:35:41.916944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:26.175 [2024-04-18 19:35:41.917022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:42:26.175 [2024-04-18 19:35:41.917121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:26.175 [2024-04-18 19:35:41.917657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:26.175 [2024-04-18 19:35:41.917818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:26.175 [2024-04-18 19:35:41.918046] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:42:26.175 [2024-04-18 19:35:41.918214] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:26.175 pt2 00:42:26.175 19:35:41 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:42:26.433 [2024-04-18 19:35:42.124661] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:26.433 19:35:42 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:26.433 19:35:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:26.691 19:35:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:26.691 "name": "raid_bdev1", 00:42:26.691 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:26.691 "strip_size_kb": 64, 00:42:26.691 "state": "configuring", 00:42:26.691 "raid_level": "raid5f", 00:42:26.691 "superblock": true, 00:42:26.691 "num_base_bdevs": 4, 00:42:26.691 "num_base_bdevs_discovered": 1, 00:42:26.691 "num_base_bdevs_operational": 4, 00:42:26.691 "base_bdevs_list": [ 00:42:26.691 { 00:42:26.691 "name": "pt1", 00:42:26.691 "uuid": "8a750a4e-3ed3-599f-a327-c265405ae0c3", 00:42:26.691 "is_configured": true, 00:42:26.691 "data_offset": 2048, 00:42:26.691 "data_size": 63488 00:42:26.691 }, 00:42:26.691 { 00:42:26.691 "name": null, 00:42:26.691 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:26.691 "is_configured": false, 00:42:26.691 "data_offset": 2048, 00:42:26.691 "data_size": 63488 00:42:26.691 }, 00:42:26.691 { 00:42:26.691 "name": null, 00:42:26.691 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:26.691 "is_configured": false, 00:42:26.691 "data_offset": 2048, 00:42:26.691 "data_size": 63488 00:42:26.691 }, 00:42:26.691 { 00:42:26.691 "name": null, 00:42:26.691 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:26.691 "is_configured": false, 00:42:26.691 "data_offset": 2048, 00:42:26.691 "data_size": 63488 00:42:26.691 } 00:42:26.691 ] 00:42:26.691 }' 00:42:26.691 19:35:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:26.691 19:35:42 -- common/autotest_common.sh@10 -- # set +x 00:42:27.257 19:35:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:42:27.257 19:35:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:42:27.257 19:35:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:27.514 [2024-04-18 19:35:43.424970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:27.514 [2024-04-18 19:35:43.425319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:27.514 [2024-04-18 19:35:43.425498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:42:27.514 [2024-04-18 19:35:43.425642] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:27.514 [2024-04-18 19:35:43.426326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:27.514 [2024-04-18 19:35:43.426528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:27.514 [2024-04-18 19:35:43.426795] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:42:27.514 [2024-04-18 19:35:43.426934] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:27.514 pt2 00:42:27.771 19:35:43 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:42:27.771 19:35:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:42:27.771 19:35:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:42:28.029 [2024-04-18 19:35:43.705000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:42:28.029 [2024-04-18 19:35:43.705228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:28.029 [2024-04-18 19:35:43.705343] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:42:28.029 [2024-04-18 19:35:43.705454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:28.029 [2024-04-18 19:35:43.706035] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:28.029 [2024-04-18 19:35:43.706201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:42:28.029 [2024-04-18 19:35:43.706408] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:42:28.029 [2024-04-18 19:35:43.706520] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:42:28.029 pt3 00:42:28.029 19:35:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:42:28.029 19:35:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:42:28.029 19:35:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:42:28.288 [2024-04-18 19:35:44.005515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:42:28.288 [2024-04-18 19:35:44.005867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:28.288 [2024-04-18 19:35:44.005956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:42:28.288 [2024-04-18 19:35:44.006166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:28.288 [2024-04-18 19:35:44.006674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:28.288 [2024-04-18 19:35:44.006845] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:42:28.288 [2024-04-18 19:35:44.007040] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:42:28.288 [2024-04-18 19:35:44.007162] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:42:28.288 [2024-04-18 19:35:44.007351] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:42:28.288 [2024-04-18 19:35:44.007496] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:28.288 [2024-04-18 19:35:44.007656] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:42:28.288 [2024-04-18 19:35:44.014783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:42:28.288 [2024-04-18 19:35:44.014913] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:42:28.288 [2024-04-18 19:35:44.015296] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:28.288 pt4 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:28.288 19:35:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:28.547 19:35:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:28.547 "name": "raid_bdev1", 00:42:28.547 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:28.547 "strip_size_kb": 64, 00:42:28.547 "state": "online", 00:42:28.547 "raid_level": "raid5f", 00:42:28.547 "superblock": true, 00:42:28.547 "num_base_bdevs": 4, 00:42:28.547 "num_base_bdevs_discovered": 4, 00:42:28.547 "num_base_bdevs_operational": 4, 00:42:28.547 "base_bdevs_list": [ 00:42:28.547 { 00:42:28.547 "name": "pt1", 00:42:28.547 "uuid": "8a750a4e-3ed3-599f-a327-c265405ae0c3", 00:42:28.547 "is_configured": true, 00:42:28.547 "data_offset": 2048, 00:42:28.547 "data_size": 63488 00:42:28.547 }, 00:42:28.547 { 00:42:28.547 "name": "pt2", 00:42:28.547 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:28.547 "is_configured": true, 00:42:28.547 "data_offset": 2048, 00:42:28.547 "data_size": 63488 00:42:28.547 }, 00:42:28.547 { 00:42:28.547 "name": "pt3", 00:42:28.547 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:28.547 "is_configured": true, 00:42:28.547 "data_offset": 2048, 00:42:28.547 "data_size": 63488 00:42:28.547 }, 00:42:28.547 { 00:42:28.547 "name": "pt4", 00:42:28.547 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:28.547 "is_configured": true, 00:42:28.547 "data_offset": 2048, 00:42:28.547 "data_size": 63488 00:42:28.547 } 00:42:28.547 ] 00:42:28.547 }' 00:42:28.547 19:35:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:28.547 19:35:44 -- common/autotest_common.sh@10 -- # set +x 00:42:29.114 19:35:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:29.114 19:35:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:42:29.373 [2024-04-18 19:35:45.121275] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:29.373 19:35:45 -- bdev/bdev_raid.sh@430 -- # '[' 1d46c325-ec0c-43cc-8d4f-5bafac40f54c '!=' 1d46c325-ec0c-43cc-8d4f-5bafac40f54c ']' 00:42:29.373 19:35:45 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:42:29.373 19:35:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:42:29.373 19:35:45 -- bdev/bdev_raid.sh@196 -- # return 0 00:42:29.373 19:35:45 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:42:29.632 [2024-04-18 19:35:45.441301] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:29.632 19:35:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.892 19:35:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:29.892 "name": "raid_bdev1", 00:42:29.892 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:29.892 "strip_size_kb": 64, 00:42:29.892 "state": "online", 00:42:29.892 "raid_level": "raid5f", 00:42:29.892 "superblock": true, 00:42:29.892 "num_base_bdevs": 4, 00:42:29.892 "num_base_bdevs_discovered": 3, 00:42:29.892 "num_base_bdevs_operational": 3, 00:42:29.892 "base_bdevs_list": [ 00:42:29.892 { 00:42:29.892 "name": null, 00:42:29.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:29.892 "is_configured": false, 00:42:29.892 "data_offset": 2048, 00:42:29.892 "data_size": 63488 00:42:29.892 }, 00:42:29.892 { 00:42:29.892 "name": "pt2", 00:42:29.892 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:29.892 "is_configured": true, 00:42:29.892 "data_offset": 2048, 00:42:29.892 "data_size": 63488 00:42:29.892 }, 00:42:29.892 { 00:42:29.892 "name": "pt3", 00:42:29.892 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:29.892 "is_configured": true, 00:42:29.892 "data_offset": 2048, 00:42:29.892 "data_size": 63488 00:42:29.892 }, 00:42:29.892 { 00:42:29.892 "name": "pt4", 00:42:29.892 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:29.892 "is_configured": true, 00:42:29.892 "data_offset": 2048, 00:42:29.892 "data_size": 63488 00:42:29.892 } 00:42:29.892 ] 00:42:29.892 }' 00:42:29.892 19:35:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:29.892 19:35:45 -- common/autotest_common.sh@10 -- # set +x 00:42:30.459 19:35:46 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:30.717 [2024-04-18 19:35:46.597517] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:30.717 [2024-04-18 19:35:46.597604] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:30.717 [2024-04-18 19:35:46.597716] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:30.717 [2024-04-18 19:35:46.597918] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:30.717 [2024-04-18 19:35:46.598023] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:42:30.717 19:35:46 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:30.717 19:35:46 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:42:30.977 
19:35:46 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:42:30.977 19:35:46 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:42:30.977 19:35:46 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:42:30.977 19:35:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:42:30.977 19:35:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:42:31.236 19:35:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:42:31.236 19:35:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:42:31.236 19:35:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:42:31.495 19:35:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:42:31.495 19:35:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:42:31.495 19:35:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:42:31.757 19:35:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:42:31.757 19:35:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:42:31.757 19:35:47 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:42:31.757 19:35:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:42:31.757 19:35:47 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:32.077 [2024-04-18 19:35:47.807830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:32.077 [2024-04-18 19:35:47.808136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:32.077 [2024-04-18 19:35:47.808202] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:42:32.077 [2024-04-18 19:35:47.808312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:32.077 [2024-04-18 19:35:47.810891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:32.077 [2024-04-18 19:35:47.811089] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:32.077 [2024-04-18 19:35:47.811316] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:42:32.077 [2024-04-18 19:35:47.811494] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:32.077 pt2 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:32.077 19:35:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:32.336 19:35:48 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:42:32.336 "name": "raid_bdev1", 00:42:32.336 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:32.336 "strip_size_kb": 64, 00:42:32.336 "state": "configuring", 00:42:32.336 "raid_level": "raid5f", 00:42:32.336 "superblock": true, 00:42:32.336 "num_base_bdevs": 4, 00:42:32.336 "num_base_bdevs_discovered": 1, 00:42:32.336 "num_base_bdevs_operational": 3, 00:42:32.336 "base_bdevs_list": [ 00:42:32.336 { 00:42:32.336 "name": null, 00:42:32.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:32.336 "is_configured": false, 00:42:32.336 "data_offset": 2048, 00:42:32.336 "data_size": 63488 00:42:32.336 }, 00:42:32.336 { 00:42:32.336 "name": "pt2", 00:42:32.336 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:32.336 "is_configured": true, 00:42:32.336 "data_offset": 2048, 00:42:32.336 "data_size": 63488 00:42:32.336 }, 00:42:32.336 { 00:42:32.336 "name": null, 00:42:32.336 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:32.336 "is_configured": false, 00:42:32.336 "data_offset": 2048, 00:42:32.336 "data_size": 63488 00:42:32.336 }, 00:42:32.336 { 00:42:32.336 "name": null, 00:42:32.336 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:32.336 "is_configured": false, 00:42:32.336 "data_offset": 2048, 00:42:32.336 "data_size": 63488 00:42:32.336 } 00:42:32.336 ] 00:42:32.336 }' 00:42:32.336 19:35:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:32.336 19:35:48 -- common/autotest_common.sh@10 -- # set +x 00:42:32.902 19:35:48 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:42:32.902 19:35:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:42:32.902 19:35:48 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:42:33.161 [2024-04-18 19:35:48.920097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:42:33.161 [2024-04-18 19:35:48.920325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:33.161 [2024-04-18 19:35:48.920472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:42:33.162 [2024-04-18 19:35:48.920595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:33.162 [2024-04-18 19:35:48.921153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:33.162 [2024-04-18 19:35:48.921322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:42:33.162 [2024-04-18 19:35:48.921534] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:42:33.162 [2024-04-18 19:35:48.921651] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:42:33.162 pt3 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:33.162 19:35:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.421 19:35:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:33.421 "name": "raid_bdev1", 00:42:33.421 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:33.421 "strip_size_kb": 64, 00:42:33.421 "state": "configuring", 00:42:33.421 "raid_level": "raid5f", 00:42:33.421 "superblock": true, 00:42:33.421 "num_base_bdevs": 4, 00:42:33.421 "num_base_bdevs_discovered": 2, 00:42:33.421 "num_base_bdevs_operational": 3, 00:42:33.421 "base_bdevs_list": [ 00:42:33.421 { 00:42:33.421 "name": null, 00:42:33.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:33.421 "is_configured": false, 00:42:33.421 "data_offset": 2048, 00:42:33.421 "data_size": 63488 00:42:33.421 }, 00:42:33.421 { 00:42:33.421 "name": "pt2", 00:42:33.421 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:33.421 "is_configured": true, 00:42:33.421 "data_offset": 2048, 00:42:33.421 "data_size": 63488 00:42:33.421 }, 00:42:33.421 { 00:42:33.421 "name": "pt3", 00:42:33.421 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:33.421 "is_configured": true, 00:42:33.421 "data_offset": 2048, 00:42:33.421 "data_size": 63488 00:42:33.421 }, 00:42:33.421 { 00:42:33.421 "name": null, 00:42:33.421 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:33.421 "is_configured": false, 00:42:33.421 "data_offset": 2048, 00:42:33.421 "data_size": 63488 00:42:33.421 } 00:42:33.421 ] 00:42:33.421 }' 00:42:33.421 19:35:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:33.421 19:35:49 -- common/autotest_common.sh@10 -- # set +x 00:42:34.045 19:35:49 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:42:34.045 19:35:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:42:34.045 19:35:49 -- bdev/bdev_raid.sh@462 -- # i=3 00:42:34.045 19:35:49 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:42:34.303 [2024-04-18 19:35:50.068315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:42:34.303 [2024-04-18 19:35:50.068637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:34.303 [2024-04-18 19:35:50.068716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:42:34.303 [2024-04-18 19:35:50.068814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:34.303 [2024-04-18 19:35:50.069326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:34.303 [2024-04-18 19:35:50.069469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:42:34.303 [2024-04-18 19:35:50.069660] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:42:34.303 [2024-04-18 19:35:50.069767] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:42:34.303 [2024-04-18 19:35:50.069944] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:42:34.303 [2024-04-18 19:35:50.070068] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:34.303 [2024-04-18 19:35:50.070234] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:42:34.303 [2024-04-18 19:35:50.077754] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:42:34.304 [2024-04-18 19:35:50.077937] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:42:34.304 [2024-04-18 19:35:50.078346] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:34.304 pt4 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:34.304 19:35:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.563 19:35:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:34.563 "name": "raid_bdev1", 00:42:34.563 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:34.563 "strip_size_kb": 64, 00:42:34.563 "state": "online", 00:42:34.563 "raid_level": "raid5f", 00:42:34.563 "superblock": true, 00:42:34.563 "num_base_bdevs": 4, 00:42:34.563 "num_base_bdevs_discovered": 3, 00:42:34.563 "num_base_bdevs_operational": 3, 00:42:34.563 "base_bdevs_list": [ 00:42:34.563 { 00:42:34.563 "name": null, 00:42:34.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.563 "is_configured": false, 00:42:34.563 "data_offset": 2048, 00:42:34.563 "data_size": 63488 00:42:34.563 }, 00:42:34.563 { 00:42:34.563 "name": "pt2", 00:42:34.563 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:34.563 "is_configured": true, 00:42:34.563 "data_offset": 2048, 00:42:34.563 "data_size": 63488 00:42:34.563 }, 00:42:34.563 { 00:42:34.563 "name": "pt3", 00:42:34.563 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:34.563 "is_configured": true, 00:42:34.563 "data_offset": 2048, 00:42:34.563 "data_size": 63488 00:42:34.563 }, 00:42:34.563 { 00:42:34.563 "name": "pt4", 00:42:34.563 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:34.563 "is_configured": true, 00:42:34.563 "data_offset": 2048, 00:42:34.563 "data_size": 63488 00:42:34.563 } 00:42:34.563 ] 00:42:34.563 }' 00:42:34.563 19:35:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:34.563 19:35:50 -- common/autotest_common.sh@10 -- # set +x 00:42:35.499 19:35:51 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:42:35.499 19:35:51 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:35.499 [2024-04-18 19:35:51.315186] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:35.499 [2024-04-18 19:35:51.316291] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:35.499 [2024-04-18 19:35:51.316765] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:35.499 [2024-04-18 19:35:51.317014] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:35.499 [2024-04-18 19:35:51.317236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:42:35.499 19:35:51 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:35.499 19:35:51 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:42:35.758 19:35:51 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:42:35.758 19:35:51 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:42:35.758 19:35:51 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:36.016 [2024-04-18 19:35:51.795318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:36.016 [2024-04-18 19:35:51.795655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:36.016 [2024-04-18 19:35:51.795735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:42:36.016 [2024-04-18 19:35:51.795835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:36.016 [2024-04-18 19:35:51.798426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:36.016 [2024-04-18 19:35:51.798615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:36.016 [2024-04-18 19:35:51.798816] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:42:36.016 [2024-04-18 19:35:51.798998] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:36.016 pt1 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:36.016 19:35:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:36.275 19:35:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:36.275 "name": "raid_bdev1", 00:42:36.275 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:36.275 "strip_size_kb": 64, 00:42:36.275 "state": "configuring", 00:42:36.275 "raid_level": "raid5f", 00:42:36.275 "superblock": true, 00:42:36.275 "num_base_bdevs": 4, 00:42:36.275 "num_base_bdevs_discovered": 1, 00:42:36.275 "num_base_bdevs_operational": 4, 00:42:36.275 "base_bdevs_list": [ 00:42:36.275 { 00:42:36.275 "name": "pt1", 00:42:36.275 "uuid": "8a750a4e-3ed3-599f-a327-c265405ae0c3", 00:42:36.275 "is_configured": true, 
00:42:36.275 "data_offset": 2048, 00:42:36.275 "data_size": 63488 00:42:36.275 }, 00:42:36.275 { 00:42:36.275 "name": null, 00:42:36.275 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:36.275 "is_configured": false, 00:42:36.275 "data_offset": 2048, 00:42:36.275 "data_size": 63488 00:42:36.275 }, 00:42:36.275 { 00:42:36.275 "name": null, 00:42:36.275 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:36.275 "is_configured": false, 00:42:36.275 "data_offset": 2048, 00:42:36.275 "data_size": 63488 00:42:36.275 }, 00:42:36.275 { 00:42:36.275 "name": null, 00:42:36.275 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:36.275 "is_configured": false, 00:42:36.275 "data_offset": 2048, 00:42:36.275 "data_size": 63488 00:42:36.275 } 00:42:36.275 ] 00:42:36.275 }' 00:42:36.275 19:35:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:36.275 19:35:52 -- common/autotest_common.sh@10 -- # set +x 00:42:37.211 19:35:52 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:42:37.211 19:35:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:42:37.211 19:35:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:42:37.211 19:35:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:42:37.211 19:35:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:42:37.211 19:35:53 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:42:37.468 19:35:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:42:37.468 19:35:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:42:37.468 19:35:53 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:42:37.725 19:35:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:42:37.725 19:35:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:42:37.725 19:35:53 -- bdev/bdev_raid.sh@489 -- # i=3 00:42:37.725 19:35:53 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:42:37.984 [2024-04-18 19:35:53.815786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:42:37.984 [2024-04-18 19:35:53.816065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:37.984 [2024-04-18 19:35:53.816191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:42:37.984 [2024-04-18 19:35:53.816295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:37.984 [2024-04-18 19:35:53.816856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:37.984 [2024-04-18 19:35:53.817037] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:42:37.984 [2024-04-18 19:35:53.817273] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:42:37.984 [2024-04-18 19:35:53.817372] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:42:37.984 [2024-04-18 19:35:53.817460] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:37.984 [2024-04-18 19:35:53.817522] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:42:37.984 [2024-04-18 19:35:53.817672] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:42:37.984 pt4 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:37.984 19:35:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:38.243 19:35:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:38.243 "name": "raid_bdev1", 00:42:38.243 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:38.243 "strip_size_kb": 64, 00:42:38.243 "state": "configuring", 00:42:38.243 "raid_level": "raid5f", 00:42:38.243 "superblock": true, 00:42:38.243 "num_base_bdevs": 4, 00:42:38.243 "num_base_bdevs_discovered": 1, 00:42:38.243 "num_base_bdevs_operational": 3, 00:42:38.243 "base_bdevs_list": [ 00:42:38.243 { 00:42:38.243 "name": null, 00:42:38.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:38.243 "is_configured": false, 00:42:38.243 "data_offset": 2048, 00:42:38.243 "data_size": 63488 00:42:38.243 }, 00:42:38.243 { 00:42:38.243 "name": null, 00:42:38.243 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:38.243 "is_configured": false, 00:42:38.243 "data_offset": 2048, 00:42:38.243 "data_size": 63488 00:42:38.243 }, 00:42:38.243 { 00:42:38.243 "name": null, 00:42:38.243 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:38.243 "is_configured": false, 00:42:38.243 "data_offset": 2048, 00:42:38.243 "data_size": 63488 00:42:38.243 }, 00:42:38.243 { 00:42:38.243 "name": "pt4", 00:42:38.243 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:38.243 "is_configured": true, 00:42:38.243 "data_offset": 2048, 00:42:38.243 "data_size": 63488 00:42:38.243 } 00:42:38.243 ] 00:42:38.243 }' 00:42:38.243 19:35:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:38.243 19:35:54 -- common/autotest_common.sh@10 -- # set +x 00:42:39.224 19:35:54 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:42:39.224 19:35:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:42:39.224 19:35:54 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:39.224 [2024-04-18 19:35:55.036019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:39.224 [2024-04-18 19:35:55.036259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:39.224 [2024-04-18 19:35:55.036393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:42:39.224 [2024-04-18 19:35:55.036490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:39.225 [2024-04-18 19:35:55.037059] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:39.225 [2024-04-18 19:35:55.037224] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:39.225 [2024-04-18 19:35:55.037423] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:42:39.225 [2024-04-18 19:35:55.037531] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:39.225 pt2 00:42:39.225 19:35:55 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:42:39.225 19:35:55 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:42:39.225 19:35:55 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:42:39.483 [2024-04-18 19:35:55.280119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:42:39.483 [2024-04-18 19:35:55.280415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:39.483 [2024-04-18 19:35:55.280483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:42:39.483 [2024-04-18 19:35:55.280580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:39.483 [2024-04-18 19:35:55.281079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:39.483 [2024-04-18 19:35:55.281274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:42:39.483 [2024-04-18 19:35:55.281476] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:42:39.483 [2024-04-18 19:35:55.281603] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:42:39.483 [2024-04-18 19:35:55.281801] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:42:39.483 [2024-04-18 19:35:55.281912] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:39.483 [2024-04-18 19:35:55.282037] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:42:39.483 [2024-04-18 19:35:55.289899] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:42:39.483 [2024-04-18 19:35:55.290031] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:42:39.483 [2024-04-18 19:35:55.290395] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:39.483 pt3 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:39.483 19:35:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:40.003 19:35:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:40.003 "name": "raid_bdev1", 00:42:40.003 "uuid": "1d46c325-ec0c-43cc-8d4f-5bafac40f54c", 00:42:40.003 "strip_size_kb": 64, 00:42:40.003 "state": "online", 00:42:40.003 "raid_level": "raid5f", 00:42:40.003 "superblock": true, 00:42:40.003 "num_base_bdevs": 4, 00:42:40.003 "num_base_bdevs_discovered": 3, 00:42:40.003 "num_base_bdevs_operational": 3, 00:42:40.003 "base_bdevs_list": [ 00:42:40.003 { 00:42:40.003 "name": null, 00:42:40.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:40.003 "is_configured": false, 00:42:40.003 "data_offset": 2048, 00:42:40.003 "data_size": 63488 00:42:40.003 }, 00:42:40.003 { 00:42:40.003 "name": "pt2", 00:42:40.003 "uuid": "aa470bf1-d4a4-5df5-8a0b-54c422a870bd", 00:42:40.003 "is_configured": true, 00:42:40.003 "data_offset": 2048, 00:42:40.003 "data_size": 63488 00:42:40.003 }, 00:42:40.003 { 00:42:40.003 "name": "pt3", 00:42:40.003 "uuid": "ab1204d6-3571-5c34-963d-a3e0bbcef148", 00:42:40.003 "is_configured": true, 00:42:40.003 "data_offset": 2048, 00:42:40.003 "data_size": 63488 00:42:40.003 }, 00:42:40.003 { 00:42:40.003 "name": "pt4", 00:42:40.003 "uuid": "0e94b979-db65-5988-a06a-a73c611d044a", 00:42:40.003 "is_configured": true, 00:42:40.003 "data_offset": 2048, 00:42:40.003 "data_size": 63488 00:42:40.003 } 00:42:40.003 ] 00:42:40.003 }' 00:42:40.003 19:35:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:40.003 19:35:55 -- common/autotest_common.sh@10 -- # set +x 00:42:40.569 19:35:56 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:42:40.569 19:35:56 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:40.569 [2024-04-18 19:35:56.402004] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:40.569 19:35:56 -- bdev/bdev_raid.sh@506 -- # '[' 1d46c325-ec0c-43cc-8d4f-5bafac40f54c '!=' 1d46c325-ec0c-43cc-8d4f-5bafac40f54c ']' 00:42:40.569 19:35:56 -- bdev/bdev_raid.sh@511 -- # killprocess 141489 00:42:40.569 19:35:56 -- common/autotest_common.sh@936 -- # '[' -z 141489 ']' 00:42:40.569 19:35:56 -- common/autotest_common.sh@940 -- # kill -0 141489 00:42:40.569 19:35:56 -- common/autotest_common.sh@941 -- # uname 00:42:40.569 19:35:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:40.569 19:35:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141489 00:42:40.569 killing process with pid 141489 00:42:40.569 19:35:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:40.569 19:35:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:40.569 19:35:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141489' 00:42:40.569 19:35:56 -- common/autotest_common.sh@955 -- # kill 141489 00:42:40.569 19:35:56 -- common/autotest_common.sh@960 -- # wait 141489 00:42:40.569 [2024-04-18 19:35:56.442969] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:40.569 [2024-04-18 19:35:56.443055] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:40.569 [2024-04-18 19:35:56.443129] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:40.569 [2024-04-18 19:35:56.443140] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:42:41.134 [2024-04-18 19:35:56.874076] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:42.508 ************************************ 00:42:42.508 END TEST raid5f_superblock_test 00:42:42.508 ************************************ 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@513 -- # return 0 00:42:42.508 00:42:42.508 real 0m25.102s 00:42:42.508 user 0m45.610s 00:42:42.508 sys 0m3.102s 00:42:42.508 19:35:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:42.508 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:42:42.508 19:35:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:42:42.508 19:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:42.508 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:42:42.508 ************************************ 00:42:42.508 START TEST raid5f_rebuild_test 00:42:42.508 ************************************ 00:42:42.508 19:35:58 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 false false 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:42:42.508 19:35:58 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:42:42.509 19:35:58 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:42:42.509 19:35:58 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:42:42.509 19:35:58 -- bdev/bdev_raid.sh@544 -- # 
raid_pid=142242 00:42:42.509 19:35:58 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:42.509 19:35:58 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142242 /var/tmp/spdk-raid.sock 00:42:42.509 19:35:58 -- common/autotest_common.sh@817 -- # '[' -z 142242 ']' 00:42:42.509 19:35:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:42.509 19:35:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:42.509 19:35:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:42.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:42.509 19:35:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:42.509 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:42:42.766 [2024-04-18 19:35:58.449331] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:42:42.766 [2024-04-18 19:35:58.449687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142242 ] 00:42:42.766 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:42.766 Zero copy mechanism will not be used. 00:42:42.766 [2024-04-18 19:35:58.614737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:43.023 [2024-04-18 19:35:58.844546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.282 [2024-04-18 19:35:59.106788] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:43.541 19:35:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:43.541 19:35:59 -- common/autotest_common.sh@850 -- # return 0 00:42:43.541 19:35:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:43.541 19:35:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:43.541 19:35:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:42:43.800 BaseBdev1 00:42:43.800 19:35:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:43.800 19:35:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:43.800 19:35:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:42:44.058 BaseBdev2 00:42:44.058 19:35:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:44.058 19:35:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:44.058 19:35:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:42:44.318 BaseBdev3 00:42:44.318 19:36:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:44.318 19:36:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:44.318 19:36:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:42:44.886 BaseBdev4 00:42:44.886 19:36:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:42:45.145 spare_malloc 00:42:45.145 
19:36:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:45.403 spare_delay 00:42:45.403 19:36:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:45.662 [2024-04-18 19:36:01.339117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:45.662 [2024-04-18 19:36:01.339773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:45.662 [2024-04-18 19:36:01.339921] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:45.663 [2024-04-18 19:36:01.340040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:45.663 [2024-04-18 19:36:01.342764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:45.663 [2024-04-18 19:36:01.342943] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:45.663 spare 00:42:45.663 19:36:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:42:45.663 [2024-04-18 19:36:01.559374] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:45.663 [2024-04-18 19:36:01.561844] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:45.663 [2024-04-18 19:36:01.562042] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:45.663 [2024-04-18 19:36:01.562175] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:45.663 [2024-04-18 19:36:01.562325] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:42:45.663 [2024-04-18 19:36:01.562404] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:42:45.663 [2024-04-18 19:36:01.562622] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:42:45.663 [2024-04-18 19:36:01.572102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:42:45.663 [2024-04-18 19:36:01.572254] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:42:45.663 [2024-04-18 19:36:01.572587] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:45.921 19:36:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.921 19:36:01 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:46.181 19:36:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:46.181 "name": "raid_bdev1", 00:42:46.181 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:46.181 "strip_size_kb": 64, 00:42:46.181 "state": "online", 00:42:46.181 "raid_level": "raid5f", 00:42:46.181 "superblock": false, 00:42:46.181 "num_base_bdevs": 4, 00:42:46.181 "num_base_bdevs_discovered": 4, 00:42:46.181 "num_base_bdevs_operational": 4, 00:42:46.181 "base_bdevs_list": [ 00:42:46.182 { 00:42:46.182 "name": "BaseBdev1", 00:42:46.182 "uuid": "47fae9a0-4d1c-468b-871d-0930968d06fc", 00:42:46.182 "is_configured": true, 00:42:46.182 "data_offset": 0, 00:42:46.182 "data_size": 65536 00:42:46.182 }, 00:42:46.182 { 00:42:46.182 "name": "BaseBdev2", 00:42:46.182 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:46.182 "is_configured": true, 00:42:46.182 "data_offset": 0, 00:42:46.182 "data_size": 65536 00:42:46.182 }, 00:42:46.182 { 00:42:46.182 "name": "BaseBdev3", 00:42:46.182 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:46.182 "is_configured": true, 00:42:46.182 "data_offset": 0, 00:42:46.182 "data_size": 65536 00:42:46.182 }, 00:42:46.182 { 00:42:46.182 "name": "BaseBdev4", 00:42:46.182 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:46.182 "is_configured": true, 00:42:46.182 "data_offset": 0, 00:42:46.182 "data_size": 65536 00:42:46.182 } 00:42:46.182 ] 00:42:46.182 }' 00:42:46.182 19:36:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:46.182 19:36:01 -- common/autotest_common.sh@10 -- # set +x 00:42:46.749 19:36:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:46.749 19:36:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:42:47.316 [2024-04-18 19:36:02.951189] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:47.316 19:36:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:42:47.316 19:36:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:47.316 19:36:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:47.575 19:36:03 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:42:47.575 19:36:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:42:47.575 19:36:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:42:47.575 19:36:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@12 -- # local i 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:47.575 19:36:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:42:47.836 [2024-04-18 19:36:03.535118] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:42:47.836 /dev/nbd0 00:42:47.836 19:36:03 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:42:47.836 19:36:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:47.836 19:36:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:42:47.836 19:36:03 -- common/autotest_common.sh@855 -- # local i 00:42:47.836 19:36:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:42:47.836 19:36:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:42:47.836 19:36:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:42:47.836 19:36:03 -- common/autotest_common.sh@859 -- # break 00:42:47.836 19:36:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:42:47.836 19:36:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:42:47.836 19:36:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:47.836 1+0 records in 00:42:47.836 1+0 records out 00:42:47.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413615 s, 9.9 MB/s 00:42:47.836 19:36:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:47.837 19:36:03 -- common/autotest_common.sh@872 -- # size=4096 00:42:47.837 19:36:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:47.837 19:36:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:42:47.837 19:36:03 -- common/autotest_common.sh@875 -- # return 0 00:42:47.837 19:36:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:47.837 19:36:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:47.837 19:36:03 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:42:47.837 19:36:03 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:42:47.837 19:36:03 -- bdev/bdev_raid.sh@582 -- # echo 192 00:42:47.837 19:36:03 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:42:48.411 512+0 records in 00:42:48.411 512+0 records out 00:42:48.411 100663296 bytes (101 MB, 96 MiB) copied, 0.654969 s, 154 MB/s 00:42:48.411 19:36:04 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:42:48.411 19:36:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:48.411 19:36:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:42:48.411 19:36:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:48.411 19:36:04 -- bdev/nbd_common.sh@51 -- # local i 00:42:48.411 19:36:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:48.411 19:36:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:42:48.675 19:36:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:48.675 [2024-04-18 19:36:04.556285] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@41 -- # break 00:42:48.676 19:36:04 -- bdev/nbd_common.sh@45 -- # return 0 00:42:48.676 19:36:04 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:42:48.944 [2024-04-18 19:36:04.822823] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:48.944 
19:36:04 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:48.944 19:36:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:49.536 19:36:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:49.536 "name": "raid_bdev1", 00:42:49.536 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:49.536 "strip_size_kb": 64, 00:42:49.536 "state": "online", 00:42:49.536 "raid_level": "raid5f", 00:42:49.536 "superblock": false, 00:42:49.536 "num_base_bdevs": 4, 00:42:49.536 "num_base_bdevs_discovered": 3, 00:42:49.536 "num_base_bdevs_operational": 3, 00:42:49.536 "base_bdevs_list": [ 00:42:49.536 { 00:42:49.536 "name": null, 00:42:49.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:49.536 "is_configured": false, 00:42:49.536 "data_offset": 0, 00:42:49.536 "data_size": 65536 00:42:49.536 }, 00:42:49.536 { 00:42:49.536 "name": "BaseBdev2", 00:42:49.536 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:49.536 "is_configured": true, 00:42:49.536 "data_offset": 0, 00:42:49.536 "data_size": 65536 00:42:49.536 }, 00:42:49.536 { 00:42:49.536 "name": "BaseBdev3", 00:42:49.536 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:49.536 "is_configured": true, 00:42:49.536 "data_offset": 0, 00:42:49.536 "data_size": 65536 00:42:49.536 }, 00:42:49.536 { 00:42:49.536 "name": "BaseBdev4", 00:42:49.536 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:49.536 "is_configured": true, 00:42:49.536 "data_offset": 0, 00:42:49.536 "data_size": 65536 00:42:49.536 } 00:42:49.536 ] 00:42:49.536 }' 00:42:49.536 19:36:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:49.536 19:36:05 -- common/autotest_common.sh@10 -- # set +x 00:42:50.108 19:36:05 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:50.367 [2024-04-18 19:36:06.211106] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:50.367 [2024-04-18 19:36:06.211344] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:50.367 [2024-04-18 19:36:06.230627] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:42:50.367 [2024-04-18 19:36:06.242075] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:50.367 19:36:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:42:51.742 19:36:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:51.743 
19:36:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:51.743 "name": "raid_bdev1", 00:42:51.743 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:51.743 "strip_size_kb": 64, 00:42:51.743 "state": "online", 00:42:51.743 "raid_level": "raid5f", 00:42:51.743 "superblock": false, 00:42:51.743 "num_base_bdevs": 4, 00:42:51.743 "num_base_bdevs_discovered": 4, 00:42:51.743 "num_base_bdevs_operational": 4, 00:42:51.743 "process": { 00:42:51.743 "type": "rebuild", 00:42:51.743 "target": "spare", 00:42:51.743 "progress": { 00:42:51.743 "blocks": 23040, 00:42:51.743 "percent": 11 00:42:51.743 } 00:42:51.743 }, 00:42:51.743 "base_bdevs_list": [ 00:42:51.743 { 00:42:51.743 "name": "spare", 00:42:51.743 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:42:51.743 "is_configured": true, 00:42:51.743 "data_offset": 0, 00:42:51.743 "data_size": 65536 00:42:51.743 }, 00:42:51.743 { 00:42:51.743 "name": "BaseBdev2", 00:42:51.743 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:51.743 "is_configured": true, 00:42:51.743 "data_offset": 0, 00:42:51.743 "data_size": 65536 00:42:51.743 }, 00:42:51.743 { 00:42:51.743 "name": "BaseBdev3", 00:42:51.743 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:51.743 "is_configured": true, 00:42:51.743 "data_offset": 0, 00:42:51.743 "data_size": 65536 00:42:51.743 }, 00:42:51.743 { 00:42:51.743 "name": "BaseBdev4", 00:42:51.743 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:51.743 "is_configured": true, 00:42:51.743 "data_offset": 0, 00:42:51.743 "data_size": 65536 00:42:51.743 } 00:42:51.743 ] 00:42:51.743 }' 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:51.743 19:36:07 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:42:52.044 [2024-04-18 19:36:07.913429] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:52.324 [2024-04-18 19:36:07.956101] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:52.324 [2024-04-18 19:36:07.956462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:52.324 19:36:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:52.583 19:36:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:52.583 "name": "raid_bdev1", 00:42:52.583 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:52.583 "strip_size_kb": 64, 00:42:52.583 "state": "online", 00:42:52.584 "raid_level": "raid5f", 00:42:52.584 "superblock": false, 00:42:52.584 "num_base_bdevs": 4, 00:42:52.584 "num_base_bdevs_discovered": 3, 00:42:52.584 "num_base_bdevs_operational": 3, 00:42:52.584 "base_bdevs_list": [ 00:42:52.584 { 00:42:52.584 "name": null, 00:42:52.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:52.584 "is_configured": false, 00:42:52.584 "data_offset": 0, 00:42:52.584 "data_size": 65536 00:42:52.584 }, 00:42:52.584 { 00:42:52.584 "name": "BaseBdev2", 00:42:52.584 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:52.584 "is_configured": true, 00:42:52.584 "data_offset": 0, 00:42:52.584 "data_size": 65536 00:42:52.584 }, 00:42:52.584 { 00:42:52.584 "name": "BaseBdev3", 00:42:52.584 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:52.584 "is_configured": true, 00:42:52.584 "data_offset": 0, 00:42:52.584 "data_size": 65536 00:42:52.584 }, 00:42:52.584 { 00:42:52.584 "name": "BaseBdev4", 00:42:52.584 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:52.584 "is_configured": true, 00:42:52.584 "data_offset": 0, 00:42:52.584 "data_size": 65536 00:42:52.584 } 00:42:52.584 ] 00:42:52.584 }' 00:42:52.584 19:36:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:52.584 19:36:08 -- common/autotest_common.sh@10 -- # set +x 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:53.152 19:36:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:53.410 19:36:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:53.410 "name": "raid_bdev1", 00:42:53.410 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:53.410 "strip_size_kb": 64, 00:42:53.410 "state": "online", 00:42:53.410 "raid_level": "raid5f", 00:42:53.410 "superblock": false, 00:42:53.410 "num_base_bdevs": 4, 00:42:53.410 "num_base_bdevs_discovered": 3, 00:42:53.410 "num_base_bdevs_operational": 3, 00:42:53.410 "base_bdevs_list": [ 00:42:53.410 { 00:42:53.410 "name": null, 00:42:53.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.410 "is_configured": false, 00:42:53.410 "data_offset": 0, 00:42:53.410 "data_size": 65536 00:42:53.410 }, 00:42:53.410 { 00:42:53.410 "name": "BaseBdev2", 00:42:53.410 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:53.410 "is_configured": true, 00:42:53.410 "data_offset": 0, 00:42:53.410 "data_size": 65536 00:42:53.410 }, 00:42:53.410 { 00:42:53.410 "name": "BaseBdev3", 00:42:53.410 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:53.410 "is_configured": true, 
00:42:53.410 "data_offset": 0, 00:42:53.410 "data_size": 65536 00:42:53.410 }, 00:42:53.410 { 00:42:53.410 "name": "BaseBdev4", 00:42:53.410 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:53.410 "is_configured": true, 00:42:53.410 "data_offset": 0, 00:42:53.411 "data_size": 65536 00:42:53.411 } 00:42:53.411 ] 00:42:53.411 }' 00:42:53.411 19:36:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:53.669 19:36:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:53.669 19:36:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:53.669 19:36:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:53.669 19:36:09 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:53.928 [2024-04-18 19:36:09.706937] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:53.928 [2024-04-18 19:36:09.707568] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:53.928 [2024-04-18 19:36:09.724302] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:42:53.928 [2024-04-18 19:36:09.735199] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:53.928 19:36:09 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:54.883 19:36:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:55.141 19:36:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:55.141 "name": "raid_bdev1", 00:42:55.141 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:55.141 "strip_size_kb": 64, 00:42:55.141 "state": "online", 00:42:55.141 "raid_level": "raid5f", 00:42:55.141 "superblock": false, 00:42:55.141 "num_base_bdevs": 4, 00:42:55.141 "num_base_bdevs_discovered": 4, 00:42:55.141 "num_base_bdevs_operational": 4, 00:42:55.141 "process": { 00:42:55.141 "type": "rebuild", 00:42:55.141 "target": "spare", 00:42:55.141 "progress": { 00:42:55.142 "blocks": 23040, 00:42:55.142 "percent": 11 00:42:55.142 } 00:42:55.142 }, 00:42:55.142 "base_bdevs_list": [ 00:42:55.142 { 00:42:55.142 "name": "spare", 00:42:55.142 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:42:55.142 "is_configured": true, 00:42:55.142 "data_offset": 0, 00:42:55.142 "data_size": 65536 00:42:55.142 }, 00:42:55.142 { 00:42:55.142 "name": "BaseBdev2", 00:42:55.142 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:55.142 "is_configured": true, 00:42:55.142 "data_offset": 0, 00:42:55.142 "data_size": 65536 00:42:55.142 }, 00:42:55.142 { 00:42:55.142 "name": "BaseBdev3", 00:42:55.142 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:55.142 "is_configured": true, 00:42:55.142 "data_offset": 0, 00:42:55.142 "data_size": 65536 00:42:55.142 }, 00:42:55.142 { 00:42:55.142 "name": "BaseBdev4", 00:42:55.142 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:55.142 "is_configured": true, 00:42:55.142 "data_offset": 0, 
00:42:55.142 "data_size": 65536 00:42:55.142 } 00:42:55.142 ] 00:42:55.142 }' 00:42:55.142 19:36:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@657 -- # local timeout=816 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:55.400 19:36:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:55.659 19:36:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:55.659 "name": "raid_bdev1", 00:42:55.659 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:55.659 "strip_size_kb": 64, 00:42:55.659 "state": "online", 00:42:55.659 "raid_level": "raid5f", 00:42:55.659 "superblock": false, 00:42:55.659 "num_base_bdevs": 4, 00:42:55.659 "num_base_bdevs_discovered": 4, 00:42:55.659 "num_base_bdevs_operational": 4, 00:42:55.659 "process": { 00:42:55.659 "type": "rebuild", 00:42:55.659 "target": "spare", 00:42:55.659 "progress": { 00:42:55.659 "blocks": 30720, 00:42:55.659 "percent": 15 00:42:55.659 } 00:42:55.659 }, 00:42:55.659 "base_bdevs_list": [ 00:42:55.659 { 00:42:55.659 "name": "spare", 00:42:55.659 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:42:55.659 "is_configured": true, 00:42:55.659 "data_offset": 0, 00:42:55.659 "data_size": 65536 00:42:55.659 }, 00:42:55.659 { 00:42:55.659 "name": "BaseBdev2", 00:42:55.659 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:55.659 "is_configured": true, 00:42:55.659 "data_offset": 0, 00:42:55.659 "data_size": 65536 00:42:55.659 }, 00:42:55.659 { 00:42:55.659 "name": "BaseBdev3", 00:42:55.659 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:55.659 "is_configured": true, 00:42:55.659 "data_offset": 0, 00:42:55.659 "data_size": 65536 00:42:55.659 }, 00:42:55.659 { 00:42:55.659 "name": "BaseBdev4", 00:42:55.659 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:55.659 "is_configured": true, 00:42:55.659 "data_offset": 0, 00:42:55.659 "data_size": 65536 00:42:55.659 } 00:42:55.659 ] 00:42:55.659 }' 00:42:55.659 19:36:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:55.659 19:36:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:55.659 19:36:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:55.659 19:36:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:55.659 19:36:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:57.035 "name": "raid_bdev1", 00:42:57.035 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:57.035 "strip_size_kb": 64, 00:42:57.035 "state": "online", 00:42:57.035 "raid_level": "raid5f", 00:42:57.035 "superblock": false, 00:42:57.035 "num_base_bdevs": 4, 00:42:57.035 "num_base_bdevs_discovered": 4, 00:42:57.035 "num_base_bdevs_operational": 4, 00:42:57.035 "process": { 00:42:57.035 "type": "rebuild", 00:42:57.035 "target": "spare", 00:42:57.035 "progress": { 00:42:57.035 "blocks": 57600, 00:42:57.035 "percent": 29 00:42:57.035 } 00:42:57.035 }, 00:42:57.035 "base_bdevs_list": [ 00:42:57.035 { 00:42:57.035 "name": "spare", 00:42:57.035 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:42:57.035 "is_configured": true, 00:42:57.035 "data_offset": 0, 00:42:57.035 "data_size": 65536 00:42:57.035 }, 00:42:57.035 { 00:42:57.035 "name": "BaseBdev2", 00:42:57.035 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:57.035 "is_configured": true, 00:42:57.035 "data_offset": 0, 00:42:57.035 "data_size": 65536 00:42:57.035 }, 00:42:57.035 { 00:42:57.035 "name": "BaseBdev3", 00:42:57.035 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:57.035 "is_configured": true, 00:42:57.035 "data_offset": 0, 00:42:57.035 "data_size": 65536 00:42:57.035 }, 00:42:57.035 { 00:42:57.035 "name": "BaseBdev4", 00:42:57.035 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:57.035 "is_configured": true, 00:42:57.035 "data_offset": 0, 00:42:57.035 "data_size": 65536 00:42:57.035 } 00:42:57.035 ] 00:42:57.035 }' 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:57.035 19:36:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:57.294 19:36:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:57.294 19:36:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:58.253 19:36:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:58.511 19:36:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:58.511 "name": "raid_bdev1", 00:42:58.511 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:58.511 "strip_size_kb": 64, 00:42:58.511 "state": "online", 
00:42:58.511 "raid_level": "raid5f", 00:42:58.511 "superblock": false, 00:42:58.511 "num_base_bdevs": 4, 00:42:58.511 "num_base_bdevs_discovered": 4, 00:42:58.511 "num_base_bdevs_operational": 4, 00:42:58.511 "process": { 00:42:58.511 "type": "rebuild", 00:42:58.511 "target": "spare", 00:42:58.511 "progress": { 00:42:58.511 "blocks": 84480, 00:42:58.511 "percent": 42 00:42:58.511 } 00:42:58.511 }, 00:42:58.511 "base_bdevs_list": [ 00:42:58.511 { 00:42:58.511 "name": "spare", 00:42:58.511 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:42:58.511 "is_configured": true, 00:42:58.511 "data_offset": 0, 00:42:58.511 "data_size": 65536 00:42:58.511 }, 00:42:58.511 { 00:42:58.511 "name": "BaseBdev2", 00:42:58.511 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:58.511 "is_configured": true, 00:42:58.511 "data_offset": 0, 00:42:58.511 "data_size": 65536 00:42:58.511 }, 00:42:58.511 { 00:42:58.511 "name": "BaseBdev3", 00:42:58.511 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:58.511 "is_configured": true, 00:42:58.511 "data_offset": 0, 00:42:58.511 "data_size": 65536 00:42:58.511 }, 00:42:58.511 { 00:42:58.511 "name": "BaseBdev4", 00:42:58.511 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:58.511 "is_configured": true, 00:42:58.511 "data_offset": 0, 00:42:58.511 "data_size": 65536 00:42:58.511 } 00:42:58.511 ] 00:42:58.511 }' 00:42:58.511 19:36:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:58.511 19:36:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:58.511 19:36:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:58.511 19:36:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:58.511 19:36:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:59.886 "name": "raid_bdev1", 00:42:59.886 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:42:59.886 "strip_size_kb": 64, 00:42:59.886 "state": "online", 00:42:59.886 "raid_level": "raid5f", 00:42:59.886 "superblock": false, 00:42:59.886 "num_base_bdevs": 4, 00:42:59.886 "num_base_bdevs_discovered": 4, 00:42:59.886 "num_base_bdevs_operational": 4, 00:42:59.886 "process": { 00:42:59.886 "type": "rebuild", 00:42:59.886 "target": "spare", 00:42:59.886 "progress": { 00:42:59.886 "blocks": 113280, 00:42:59.886 "percent": 57 00:42:59.886 } 00:42:59.886 }, 00:42:59.886 "base_bdevs_list": [ 00:42:59.886 { 00:42:59.886 "name": "spare", 00:42:59.886 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:42:59.886 "is_configured": true, 00:42:59.886 "data_offset": 0, 00:42:59.886 "data_size": 65536 00:42:59.886 }, 00:42:59.886 { 00:42:59.886 "name": "BaseBdev2", 00:42:59.886 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:42:59.886 "is_configured": true, 00:42:59.886 "data_offset": 0, 
00:42:59.886 "data_size": 65536 00:42:59.886 }, 00:42:59.886 { 00:42:59.886 "name": "BaseBdev3", 00:42:59.886 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:42:59.886 "is_configured": true, 00:42:59.886 "data_offset": 0, 00:42:59.886 "data_size": 65536 00:42:59.886 }, 00:42:59.886 { 00:42:59.886 "name": "BaseBdev4", 00:42:59.886 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:42:59.886 "is_configured": true, 00:42:59.886 "data_offset": 0, 00:42:59.886 "data_size": 65536 00:42:59.886 } 00:42:59.886 ] 00:42:59.886 }' 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:59.886 19:36:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:00.145 19:36:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:00.145 19:36:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:01.088 19:36:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:01.346 19:36:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:01.346 "name": "raid_bdev1", 00:43:01.346 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:43:01.346 "strip_size_kb": 64, 00:43:01.346 "state": "online", 00:43:01.346 "raid_level": "raid5f", 00:43:01.346 "superblock": false, 00:43:01.346 "num_base_bdevs": 4, 00:43:01.346 "num_base_bdevs_discovered": 4, 00:43:01.346 "num_base_bdevs_operational": 4, 00:43:01.346 "process": { 00:43:01.346 "type": "rebuild", 00:43:01.346 "target": "spare", 00:43:01.346 "progress": { 00:43:01.346 "blocks": 140160, 00:43:01.346 "percent": 71 00:43:01.346 } 00:43:01.346 }, 00:43:01.346 "base_bdevs_list": [ 00:43:01.346 { 00:43:01.346 "name": "spare", 00:43:01.346 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:43:01.346 "is_configured": true, 00:43:01.346 "data_offset": 0, 00:43:01.346 "data_size": 65536 00:43:01.346 }, 00:43:01.346 { 00:43:01.346 "name": "BaseBdev2", 00:43:01.346 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:43:01.346 "is_configured": true, 00:43:01.346 "data_offset": 0, 00:43:01.346 "data_size": 65536 00:43:01.346 }, 00:43:01.346 { 00:43:01.346 "name": "BaseBdev3", 00:43:01.346 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:43:01.346 "is_configured": true, 00:43:01.346 "data_offset": 0, 00:43:01.346 "data_size": 65536 00:43:01.346 }, 00:43:01.346 { 00:43:01.346 "name": "BaseBdev4", 00:43:01.346 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:43:01.346 "is_configured": true, 00:43:01.346 "data_offset": 0, 00:43:01.346 "data_size": 65536 00:43:01.346 } 00:43:01.346 ] 00:43:01.346 }' 00:43:01.346 19:36:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:01.346 19:36:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:01.346 19:36:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:01.346 19:36:17 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:43:01.346 19:36:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:02.722 "name": "raid_bdev1", 00:43:02.722 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:43:02.722 "strip_size_kb": 64, 00:43:02.722 "state": "online", 00:43:02.722 "raid_level": "raid5f", 00:43:02.722 "superblock": false, 00:43:02.722 "num_base_bdevs": 4, 00:43:02.722 "num_base_bdevs_discovered": 4, 00:43:02.722 "num_base_bdevs_operational": 4, 00:43:02.722 "process": { 00:43:02.722 "type": "rebuild", 00:43:02.722 "target": "spare", 00:43:02.722 "progress": { 00:43:02.722 "blocks": 167040, 00:43:02.722 "percent": 84 00:43:02.722 } 00:43:02.722 }, 00:43:02.722 "base_bdevs_list": [ 00:43:02.722 { 00:43:02.722 "name": "spare", 00:43:02.722 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:43:02.722 "is_configured": true, 00:43:02.722 "data_offset": 0, 00:43:02.722 "data_size": 65536 00:43:02.722 }, 00:43:02.722 { 00:43:02.722 "name": "BaseBdev2", 00:43:02.722 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:43:02.722 "is_configured": true, 00:43:02.722 "data_offset": 0, 00:43:02.722 "data_size": 65536 00:43:02.722 }, 00:43:02.722 { 00:43:02.722 "name": "BaseBdev3", 00:43:02.722 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:43:02.722 "is_configured": true, 00:43:02.722 "data_offset": 0, 00:43:02.722 "data_size": 65536 00:43:02.722 }, 00:43:02.722 { 00:43:02.722 "name": "BaseBdev4", 00:43:02.722 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:43:02.722 "is_configured": true, 00:43:02.722 "data_offset": 0, 00:43:02.722 "data_size": 65536 00:43:02.722 } 00:43:02.722 ] 00:43:02.722 }' 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:02.722 19:36:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:02.981 19:36:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:02.981 19:36:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:03.916 19:36:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:04.173 19:36:19 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:04.173 "name": "raid_bdev1", 00:43:04.173 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:43:04.173 "strip_size_kb": 64, 00:43:04.173 "state": "online", 00:43:04.173 "raid_level": "raid5f", 00:43:04.173 "superblock": false, 00:43:04.173 "num_base_bdevs": 4, 00:43:04.173 "num_base_bdevs_discovered": 4, 00:43:04.173 "num_base_bdevs_operational": 4, 00:43:04.173 "process": { 00:43:04.174 "type": "rebuild", 00:43:04.174 "target": "spare", 00:43:04.174 "progress": { 00:43:04.174 "blocks": 193920, 00:43:04.174 "percent": 98 00:43:04.174 } 00:43:04.174 }, 00:43:04.174 "base_bdevs_list": [ 00:43:04.174 { 00:43:04.174 "name": "spare", 00:43:04.174 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:43:04.174 "is_configured": true, 00:43:04.174 "data_offset": 0, 00:43:04.174 "data_size": 65536 00:43:04.174 }, 00:43:04.174 { 00:43:04.174 "name": "BaseBdev2", 00:43:04.174 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:43:04.174 "is_configured": true, 00:43:04.174 "data_offset": 0, 00:43:04.174 "data_size": 65536 00:43:04.174 }, 00:43:04.174 { 00:43:04.174 "name": "BaseBdev3", 00:43:04.174 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:43:04.174 "is_configured": true, 00:43:04.174 "data_offset": 0, 00:43:04.174 "data_size": 65536 00:43:04.174 }, 00:43:04.174 { 00:43:04.174 "name": "BaseBdev4", 00:43:04.174 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:43:04.174 "is_configured": true, 00:43:04.174 "data_offset": 0, 00:43:04.174 "data_size": 65536 00:43:04.174 } 00:43:04.174 ] 00:43:04.174 }' 00:43:04.174 19:36:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:04.174 19:36:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:04.174 19:36:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:04.432 [2024-04-18 19:36:20.118255] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:43:04.432 [2024-04-18 19:36:20.118578] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:04.432 [2024-04-18 19:36:20.118737] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:04.432 19:36:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:04.432 19:36:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:05.368 19:36:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:05.627 "name": "raid_bdev1", 00:43:05.627 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:43:05.627 "strip_size_kb": 64, 00:43:05.627 "state": "online", 00:43:05.627 "raid_level": "raid5f", 00:43:05.627 "superblock": false, 00:43:05.627 "num_base_bdevs": 4, 00:43:05.627 "num_base_bdevs_discovered": 4, 00:43:05.627 "num_base_bdevs_operational": 4, 00:43:05.627 "base_bdevs_list": [ 00:43:05.627 { 
00:43:05.627 "name": "spare", 00:43:05.627 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:43:05.627 "is_configured": true, 00:43:05.627 "data_offset": 0, 00:43:05.627 "data_size": 65536 00:43:05.627 }, 00:43:05.627 { 00:43:05.627 "name": "BaseBdev2", 00:43:05.627 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:43:05.627 "is_configured": true, 00:43:05.627 "data_offset": 0, 00:43:05.627 "data_size": 65536 00:43:05.627 }, 00:43:05.627 { 00:43:05.627 "name": "BaseBdev3", 00:43:05.627 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:43:05.627 "is_configured": true, 00:43:05.627 "data_offset": 0, 00:43:05.627 "data_size": 65536 00:43:05.627 }, 00:43:05.627 { 00:43:05.627 "name": "BaseBdev4", 00:43:05.627 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:43:05.627 "is_configured": true, 00:43:05.627 "data_offset": 0, 00:43:05.627 "data_size": 65536 00:43:05.627 } 00:43:05.627 ] 00:43:05.627 }' 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@660 -- # break 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:05.627 19:36:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:05.887 19:36:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:05.887 "name": "raid_bdev1", 00:43:05.887 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:43:05.887 "strip_size_kb": 64, 00:43:05.887 "state": "online", 00:43:05.887 "raid_level": "raid5f", 00:43:05.887 "superblock": false, 00:43:05.887 "num_base_bdevs": 4, 00:43:05.887 "num_base_bdevs_discovered": 4, 00:43:05.887 "num_base_bdevs_operational": 4, 00:43:05.887 "base_bdevs_list": [ 00:43:05.887 { 00:43:05.887 "name": "spare", 00:43:05.887 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:43:05.887 "is_configured": true, 00:43:05.887 "data_offset": 0, 00:43:05.887 "data_size": 65536 00:43:05.887 }, 00:43:05.887 { 00:43:05.887 "name": "BaseBdev2", 00:43:05.887 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:43:05.887 "is_configured": true, 00:43:05.887 "data_offset": 0, 00:43:05.887 "data_size": 65536 00:43:05.887 }, 00:43:05.887 { 00:43:05.887 "name": "BaseBdev3", 00:43:05.887 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:43:05.887 "is_configured": true, 00:43:05.887 "data_offset": 0, 00:43:05.887 "data_size": 65536 00:43:05.887 }, 00:43:05.887 { 00:43:05.887 "name": "BaseBdev4", 00:43:05.887 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:43:05.887 "is_configured": true, 00:43:05.887 "data_offset": 0, 00:43:05.887 "data_size": 65536 00:43:05.887 } 00:43:05.887 ] 00:43:05.887 }' 00:43:05.887 19:36:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:05.887 19:36:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:43:05.887 19:36:21 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:06.145 19:36:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:06.404 19:36:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:06.404 "name": "raid_bdev1", 00:43:06.404 "uuid": "d60acd4a-570e-4d74-9da2-ad7f824b7030", 00:43:06.404 "strip_size_kb": 64, 00:43:06.404 "state": "online", 00:43:06.404 "raid_level": "raid5f", 00:43:06.404 "superblock": false, 00:43:06.404 "num_base_bdevs": 4, 00:43:06.404 "num_base_bdevs_discovered": 4, 00:43:06.404 "num_base_bdevs_operational": 4, 00:43:06.404 "base_bdevs_list": [ 00:43:06.404 { 00:43:06.404 "name": "spare", 00:43:06.404 "uuid": "aa031ffd-a122-5579-a844-ab9b9833d88e", 00:43:06.404 "is_configured": true, 00:43:06.404 "data_offset": 0, 00:43:06.404 "data_size": 65536 00:43:06.404 }, 00:43:06.404 { 00:43:06.404 "name": "BaseBdev2", 00:43:06.404 "uuid": "994baad6-ecc1-49a4-86a0-1937fbe54da1", 00:43:06.404 "is_configured": true, 00:43:06.404 "data_offset": 0, 00:43:06.404 "data_size": 65536 00:43:06.404 }, 00:43:06.404 { 00:43:06.404 "name": "BaseBdev3", 00:43:06.404 "uuid": "28a99b15-980c-4332-9408-fb2f043aa7ac", 00:43:06.404 "is_configured": true, 00:43:06.404 "data_offset": 0, 00:43:06.404 "data_size": 65536 00:43:06.404 }, 00:43:06.404 { 00:43:06.404 "name": "BaseBdev4", 00:43:06.404 "uuid": "d51f6b8c-9a80-4ba9-b0c1-b4e6747699c6", 00:43:06.404 "is_configured": true, 00:43:06.404 "data_offset": 0, 00:43:06.404 "data_size": 65536 00:43:06.404 } 00:43:06.404 ] 00:43:06.404 }' 00:43:06.404 19:36:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:06.404 19:36:22 -- common/autotest_common.sh@10 -- # set +x 00:43:07.003 19:36:22 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:43:07.262 [2024-04-18 19:36:23.161093] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:07.262 [2024-04-18 19:36:23.161405] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:07.262 [2024-04-18 19:36:23.161676] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:07.262 [2024-04-18 19:36:23.161930] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:07.262 [2024-04-18 19:36:23.162036] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:43:07.262 19:36:23 -- bdev/bdev_raid.sh@671 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:07.262 19:36:23 -- bdev/bdev_raid.sh@671 -- # jq length 00:43:07.521 19:36:23 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:43:07.521 19:36:23 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:43:07.521 19:36:23 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@12 -- # local i 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:07.521 19:36:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:43:07.780 /dev/nbd0 00:43:08.038 19:36:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:08.038 19:36:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:08.038 19:36:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:43:08.038 19:36:23 -- common/autotest_common.sh@855 -- # local i 00:43:08.038 19:36:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:43:08.038 19:36:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:43:08.038 19:36:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:43:08.038 19:36:23 -- common/autotest_common.sh@859 -- # break 00:43:08.038 19:36:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:43:08.038 19:36:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:43:08.038 19:36:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:08.038 1+0 records in 00:43:08.038 1+0 records out 00:43:08.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540809 s, 7.6 MB/s 00:43:08.038 19:36:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:08.038 19:36:23 -- common/autotest_common.sh@872 -- # size=4096 00:43:08.038 19:36:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:08.038 19:36:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:43:08.038 19:36:23 -- common/autotest_common.sh@875 -- # return 0 00:43:08.038 19:36:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:08.038 19:36:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:08.038 19:36:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:43:08.297 /dev/nbd1 00:43:08.297 19:36:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:08.297 19:36:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:08.297 19:36:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:43:08.297 19:36:24 -- common/autotest_common.sh@855 -- # local i 00:43:08.297 19:36:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:43:08.297 19:36:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:43:08.297 19:36:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:43:08.298 19:36:24 -- common/autotest_common.sh@859 -- # break 00:43:08.298 19:36:24 -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:43:08.298 19:36:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:43:08.298 19:36:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:08.298 1+0 records in 00:43:08.298 1+0 records out 00:43:08.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608085 s, 6.7 MB/s 00:43:08.298 19:36:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:08.298 19:36:24 -- common/autotest_common.sh@872 -- # size=4096 00:43:08.298 19:36:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:08.298 19:36:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:43:08.298 19:36:24 -- common/autotest_common.sh@875 -- # return 0 00:43:08.298 19:36:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:08.298 19:36:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:08.298 19:36:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:43:08.557 19:36:24 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:43:08.557 19:36:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:08.557 19:36:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:08.557 19:36:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:08.557 19:36:24 -- bdev/nbd_common.sh@51 -- # local i 00:43:08.557 19:36:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:08.557 19:36:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@41 -- # break 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@45 -- # return 0 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:08.816 19:36:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@41 -- # break 00:43:09.075 19:36:24 -- bdev/nbd_common.sh@45 -- # return 0 00:43:09.075 19:36:24 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:43:09.075 19:36:24 -- bdev/bdev_raid.sh@709 -- # killprocess 142242 00:43:09.075 19:36:24 -- common/autotest_common.sh@936 -- # '[' -z 142242 ']' 00:43:09.075 19:36:24 -- common/autotest_common.sh@940 -- # kill -0 142242 00:43:09.075 19:36:24 -- common/autotest_common.sh@941 -- # uname 00:43:09.075 19:36:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:43:09.075 19:36:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142242 
00:43:09.075 killing process with pid 142242 00:43:09.075 Received shutdown signal, test time was about 60.000000 seconds 00:43:09.075 00:43:09.075 Latency(us) 00:43:09.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.075 =================================================================================================================== 00:43:09.075 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:09.075 19:36:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:43:09.075 19:36:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:43:09.075 19:36:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142242' 00:43:09.075 19:36:24 -- common/autotest_common.sh@955 -- # kill 142242 00:43:09.075 19:36:24 -- common/autotest_common.sh@960 -- # wait 142242 00:43:09.075 [2024-04-18 19:36:24.914282] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:09.643 [2024-04-18 19:36:25.490493] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:11.547 ************************************ 00:43:11.547 END TEST raid5f_rebuild_test 00:43:11.547 ************************************ 00:43:11.547 19:36:27 -- bdev/bdev_raid.sh@711 -- # return 0 00:43:11.547 00:43:11.547 real 0m28.623s 00:43:11.547 user 0m42.417s 00:43:11.547 sys 0m3.217s 00:43:11.547 19:36:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:11.547 19:36:27 -- common/autotest_common.sh@10 -- # set +x 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:43:11.548 19:36:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:43:11.548 19:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:11.548 19:36:27 -- common/autotest_common.sh@10 -- # set +x 00:43:11.548 ************************************ 00:43:11.548 START TEST raid5f_rebuild_test_sb 00:43:11.548 ************************************ 00:43:11.548 19:36:27 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 true false 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@521 -- # 
local base_bdevs 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@544 -- # raid_pid=142921 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142921 /var/tmp/spdk-raid.sock 00:43:11.548 19:36:27 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:11.548 19:36:27 -- common/autotest_common.sh@817 -- # '[' -z 142921 ']' 00:43:11.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:43:11.548 19:36:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:43:11.548 19:36:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:43:11.548 19:36:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:43:11.548 19:36:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:43:11.548 19:36:27 -- common/autotest_common.sh@10 -- # set +x 00:43:11.548 [2024-04-18 19:36:27.185029] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:43:11.548 [2024-04-18 19:36:27.185429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142921 ] 00:43:11.548 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:11.548 Zero copy mechanism will not be used. 
00:43:11.548 [2024-04-18 19:36:27.364367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:11.807 [2024-04-18 19:36:27.668607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:12.065 [2024-04-18 19:36:27.939095] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:12.323 19:36:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:43:12.323 19:36:28 -- common/autotest_common.sh@850 -- # return 0 00:43:12.323 19:36:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:12.323 19:36:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:43:12.323 19:36:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:43:12.581 BaseBdev1_malloc 00:43:12.581 19:36:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:13.149 [2024-04-18 19:36:28.780450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:13.149 [2024-04-18 19:36:28.780799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:13.149 [2024-04-18 19:36:28.780941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:43:13.149 [2024-04-18 19:36:28.781129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:13.149 [2024-04-18 19:36:28.783927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:13.149 [2024-04-18 19:36:28.784148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:13.149 BaseBdev1 00:43:13.149 19:36:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:13.149 19:36:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:43:13.149 19:36:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:43:13.149 BaseBdev2_malloc 00:43:13.408 19:36:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:13.408 [2024-04-18 19:36:29.296539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:13.408 [2024-04-18 19:36:29.296808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:13.408 [2024-04-18 19:36:29.296895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:43:13.408 [2024-04-18 19:36:29.297103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:13.408 [2024-04-18 19:36:29.299783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:13.408 [2024-04-18 19:36:29.299963] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:13.408 BaseBdev2 00:43:13.408 19:36:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:13.408 19:36:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:43:13.408 19:36:29 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:43:13.975 BaseBdev3_malloc 00:43:13.975 19:36:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:43:14.234 [2024-04-18 19:36:29.912063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:43:14.234 [2024-04-18 19:36:29.912514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:14.234 [2024-04-18 19:36:29.912755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:43:14.234 [2024-04-18 19:36:29.912974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:14.234 [2024-04-18 19:36:29.916681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:14.234 [2024-04-18 19:36:29.916968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:43:14.234 BaseBdev3 00:43:14.234 19:36:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:14.234 19:36:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:43:14.234 19:36:29 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:43:14.493 BaseBdev4_malloc 00:43:14.493 19:36:30 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:43:14.751 [2024-04-18 19:36:30.494803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:43:14.751 [2024-04-18 19:36:30.495218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:14.751 [2024-04-18 19:36:30.495352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:14.751 [2024-04-18 19:36:30.495507] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:14.751 [2024-04-18 19:36:30.498264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:14.751 [2024-04-18 19:36:30.498469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:43:14.751 BaseBdev4 00:43:14.751 19:36:30 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:43:15.009 spare_malloc 00:43:15.009 19:36:30 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:43:15.575 spare_delay 00:43:15.575 19:36:31 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:43:15.575 [2024-04-18 19:36:31.463932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:15.575 [2024-04-18 19:36:31.464242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:15.575 [2024-04-18 19:36:31.464314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:43:15.575 [2024-04-18 19:36:31.464479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:15.575 [2024-04-18 19:36:31.467168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:15.575 [2024-04-18 19:36:31.467349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:15.575 spare 00:43:15.575 19:36:31 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:43:15.834 [2024-04-18 19:36:31.696228] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:15.834 [2024-04-18 19:36:31.698723] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:15.834 [2024-04-18 19:36:31.699044] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:15.834 [2024-04-18 19:36:31.699185] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:43:15.834 [2024-04-18 19:36:31.699599] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:43:15.834 [2024-04-18 19:36:31.699729] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:43:15.834 [2024-04-18 19:36:31.699949] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:43:15.834 [2024-04-18 19:36:31.709717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:43:15.834 [2024-04-18 19:36:31.709954] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:43:15.834 [2024-04-18 19:36:31.710277] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:15.834 19:36:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:16.093 19:36:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:16.093 "name": "raid_bdev1", 00:43:16.093 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:16.093 "strip_size_kb": 64, 00:43:16.093 "state": "online", 00:43:16.093 "raid_level": "raid5f", 00:43:16.093 "superblock": true, 00:43:16.093 "num_base_bdevs": 4, 00:43:16.093 "num_base_bdevs_discovered": 4, 00:43:16.093 "num_base_bdevs_operational": 4, 00:43:16.093 "base_bdevs_list": [ 00:43:16.093 { 00:43:16.093 "name": "BaseBdev1", 00:43:16.093 "uuid": "48a944e2-c925-59d9-a780-72111e9593be", 00:43:16.093 "is_configured": true, 00:43:16.093 "data_offset": 2048, 00:43:16.093 "data_size": 63488 00:43:16.093 }, 00:43:16.093 { 00:43:16.093 "name": "BaseBdev2", 00:43:16.093 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:16.093 "is_configured": true, 00:43:16.093 "data_offset": 2048, 00:43:16.093 "data_size": 63488 00:43:16.093 }, 00:43:16.093 { 00:43:16.093 "name": "BaseBdev3", 00:43:16.093 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:16.093 "is_configured": true, 00:43:16.093 "data_offset": 2048, 00:43:16.093 "data_size": 63488 00:43:16.093 
}, 00:43:16.093 { 00:43:16.093 "name": "BaseBdev4", 00:43:16.093 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:16.093 "is_configured": true, 00:43:16.093 "data_offset": 2048, 00:43:16.093 "data_size": 63488 00:43:16.093 } 00:43:16.093 ] 00:43:16.093 }' 00:43:16.093 19:36:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:16.093 19:36:31 -- common/autotest_common.sh@10 -- # set +x 00:43:17.027 19:36:32 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:43:17.027 19:36:32 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:43:17.027 [2024-04-18 19:36:32.913095] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:17.027 19:36:32 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:43:17.027 19:36:32 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:17.027 19:36:32 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:43:17.286 19:36:33 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:43:17.286 19:36:33 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:43:17.286 19:36:33 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:43:17.286 19:36:33 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@12 -- # local i 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:17.286 19:36:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:43:17.543 [2024-04-18 19:36:33.357019] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:43:17.543 /dev/nbd0 00:43:17.543 19:36:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:17.543 19:36:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:17.543 19:36:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:43:17.543 19:36:33 -- common/autotest_common.sh@855 -- # local i 00:43:17.543 19:36:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:43:17.543 19:36:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:43:17.543 19:36:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:43:17.543 19:36:33 -- common/autotest_common.sh@859 -- # break 00:43:17.543 19:36:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:43:17.543 19:36:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:43:17.543 19:36:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:17.543 1+0 records in 00:43:17.543 1+0 records out 00:43:17.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478211 s, 8.6 MB/s 00:43:17.543 19:36:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:17.543 19:36:33 -- common/autotest_common.sh@872 -- # size=4096 00:43:17.543 19:36:33 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:17.543 19:36:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:43:17.543 19:36:33 -- common/autotest_common.sh@875 -- # return 0 00:43:17.543 19:36:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:17.543 19:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:17.543 19:36:33 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:43:17.543 19:36:33 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:43:17.543 19:36:33 -- bdev/bdev_raid.sh@582 -- # echo 192 00:43:17.543 19:36:33 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:43:18.480 496+0 records in 00:43:18.480 496+0 records out 00:43:18.480 97517568 bytes (98 MB, 93 MiB) copied, 0.630051 s, 155 MB/s 00:43:18.480 19:36:34 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@51 -- # local i 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:18.480 [2024-04-18 19:36:34.374774] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@41 -- # break 00:43:18.480 19:36:34 -- bdev/nbd_common.sh@45 -- # return 0 00:43:18.480 19:36:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:43:18.738 [2024-04-18 19:36:34.620096] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:18.738 19:36:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:19.014 19:36:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:19.014 "name": "raid_bdev1", 00:43:19.014 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:19.014 
"strip_size_kb": 64, 00:43:19.014 "state": "online", 00:43:19.014 "raid_level": "raid5f", 00:43:19.014 "superblock": true, 00:43:19.014 "num_base_bdevs": 4, 00:43:19.014 "num_base_bdevs_discovered": 3, 00:43:19.014 "num_base_bdevs_operational": 3, 00:43:19.015 "base_bdevs_list": [ 00:43:19.015 { 00:43:19.015 "name": null, 00:43:19.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:19.015 "is_configured": false, 00:43:19.015 "data_offset": 2048, 00:43:19.015 "data_size": 63488 00:43:19.015 }, 00:43:19.015 { 00:43:19.015 "name": "BaseBdev2", 00:43:19.015 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:19.015 "is_configured": true, 00:43:19.015 "data_offset": 2048, 00:43:19.015 "data_size": 63488 00:43:19.015 }, 00:43:19.015 { 00:43:19.015 "name": "BaseBdev3", 00:43:19.015 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:19.015 "is_configured": true, 00:43:19.015 "data_offset": 2048, 00:43:19.015 "data_size": 63488 00:43:19.015 }, 00:43:19.015 { 00:43:19.015 "name": "BaseBdev4", 00:43:19.015 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:19.015 "is_configured": true, 00:43:19.015 "data_offset": 2048, 00:43:19.015 "data_size": 63488 00:43:19.015 } 00:43:19.015 ] 00:43:19.015 }' 00:43:19.015 19:36:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:19.015 19:36:34 -- common/autotest_common.sh@10 -- # set +x 00:43:19.582 19:36:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:43:19.844 [2024-04-18 19:36:35.672282] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:43:19.844 [2024-04-18 19:36:35.672348] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:19.844 [2024-04-18 19:36:35.690653] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bd00 00:43:19.844 [2024-04-18 19:36:35.702257] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:19.844 19:36:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:20.789 19:36:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:21.355 19:36:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:21.355 "name": "raid_bdev1", 00:43:21.355 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:21.355 "strip_size_kb": 64, 00:43:21.355 "state": "online", 00:43:21.356 "raid_level": "raid5f", 00:43:21.356 "superblock": true, 00:43:21.356 "num_base_bdevs": 4, 00:43:21.356 "num_base_bdevs_discovered": 4, 00:43:21.356 "num_base_bdevs_operational": 4, 00:43:21.356 "process": { 00:43:21.356 "type": "rebuild", 00:43:21.356 "target": "spare", 00:43:21.356 "progress": { 00:43:21.356 "blocks": 23040, 00:43:21.356 "percent": 12 00:43:21.356 } 00:43:21.356 }, 00:43:21.356 "base_bdevs_list": [ 00:43:21.356 { 00:43:21.356 "name": "spare", 00:43:21.356 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:21.356 "is_configured": true, 
00:43:21.356 "data_offset": 2048, 00:43:21.356 "data_size": 63488 00:43:21.356 }, 00:43:21.356 { 00:43:21.356 "name": "BaseBdev2", 00:43:21.356 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:21.356 "is_configured": true, 00:43:21.356 "data_offset": 2048, 00:43:21.356 "data_size": 63488 00:43:21.356 }, 00:43:21.356 { 00:43:21.356 "name": "BaseBdev3", 00:43:21.356 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:21.356 "is_configured": true, 00:43:21.356 "data_offset": 2048, 00:43:21.356 "data_size": 63488 00:43:21.356 }, 00:43:21.356 { 00:43:21.356 "name": "BaseBdev4", 00:43:21.356 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:21.356 "is_configured": true, 00:43:21.356 "data_offset": 2048, 00:43:21.356 "data_size": 63488 00:43:21.356 } 00:43:21.356 ] 00:43:21.356 }' 00:43:21.356 19:36:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:21.356 19:36:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:21.356 19:36:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:21.356 19:36:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:21.356 19:36:37 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:43:21.614 [2024-04-18 19:36:37.375668] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:21.614 [2024-04-18 19:36:37.417411] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:21.614 [2024-04-18 19:36:37.417500] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:21.614 19:36:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:21.873 19:36:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:21.873 "name": "raid_bdev1", 00:43:21.873 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:21.873 "strip_size_kb": 64, 00:43:21.873 "state": "online", 00:43:21.873 "raid_level": "raid5f", 00:43:21.873 "superblock": true, 00:43:21.873 "num_base_bdevs": 4, 00:43:21.873 "num_base_bdevs_discovered": 3, 00:43:21.873 "num_base_bdevs_operational": 3, 00:43:21.873 "base_bdevs_list": [ 00:43:21.873 { 00:43:21.873 "name": null, 00:43:21.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:21.873 "is_configured": false, 00:43:21.873 "data_offset": 2048, 00:43:21.873 "data_size": 63488 00:43:21.873 }, 00:43:21.873 { 00:43:21.873 "name": "BaseBdev2", 00:43:21.873 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:21.873 "is_configured": true, 00:43:21.873 "data_offset": 
2048, 00:43:21.873 "data_size": 63488 00:43:21.873 }, 00:43:21.873 { 00:43:21.873 "name": "BaseBdev3", 00:43:21.873 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:21.873 "is_configured": true, 00:43:21.873 "data_offset": 2048, 00:43:21.873 "data_size": 63488 00:43:21.873 }, 00:43:21.873 { 00:43:21.873 "name": "BaseBdev4", 00:43:21.873 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:21.873 "is_configured": true, 00:43:21.873 "data_offset": 2048, 00:43:21.873 "data_size": 63488 00:43:21.873 } 00:43:21.873 ] 00:43:21.873 }' 00:43:21.873 19:36:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:21.873 19:36:37 -- common/autotest_common.sh@10 -- # set +x 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:22.807 "name": "raid_bdev1", 00:43:22.807 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:22.807 "strip_size_kb": 64, 00:43:22.807 "state": "online", 00:43:22.807 "raid_level": "raid5f", 00:43:22.807 "superblock": true, 00:43:22.807 "num_base_bdevs": 4, 00:43:22.807 "num_base_bdevs_discovered": 3, 00:43:22.807 "num_base_bdevs_operational": 3, 00:43:22.807 "base_bdevs_list": [ 00:43:22.807 { 00:43:22.807 "name": null, 00:43:22.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:22.807 "is_configured": false, 00:43:22.807 "data_offset": 2048, 00:43:22.807 "data_size": 63488 00:43:22.807 }, 00:43:22.807 { 00:43:22.807 "name": "BaseBdev2", 00:43:22.807 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:22.807 "is_configured": true, 00:43:22.807 "data_offset": 2048, 00:43:22.807 "data_size": 63488 00:43:22.807 }, 00:43:22.807 { 00:43:22.807 "name": "BaseBdev3", 00:43:22.807 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:22.807 "is_configured": true, 00:43:22.807 "data_offset": 2048, 00:43:22.807 "data_size": 63488 00:43:22.807 }, 00:43:22.807 { 00:43:22.807 "name": "BaseBdev4", 00:43:22.807 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:22.807 "is_configured": true, 00:43:22.807 "data_offset": 2048, 00:43:22.807 "data_size": 63488 00:43:22.807 } 00:43:22.807 ] 00:43:22.807 }' 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:43:22.807 19:36:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:43:23.065 [2024-04-18 19:36:38.926888] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:43:23.065 [2024-04-18 19:36:38.926945] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:23.065 [2024-04-18 19:36:38.943647] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00002bea0 00:43:23.065 [2024-04-18 19:36:38.954354] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:23.065 19:36:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:24.441 19:36:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:24.441 "name": "raid_bdev1", 00:43:24.441 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:24.441 "strip_size_kb": 64, 00:43:24.441 "state": "online", 00:43:24.441 "raid_level": "raid5f", 00:43:24.441 "superblock": true, 00:43:24.441 "num_base_bdevs": 4, 00:43:24.441 "num_base_bdevs_discovered": 4, 00:43:24.441 "num_base_bdevs_operational": 4, 00:43:24.441 "process": { 00:43:24.441 "type": "rebuild", 00:43:24.441 "target": "spare", 00:43:24.441 "progress": { 00:43:24.441 "blocks": 21120, 00:43:24.441 "percent": 11 00:43:24.441 } 00:43:24.441 }, 00:43:24.441 "base_bdevs_list": [ 00:43:24.441 { 00:43:24.441 "name": "spare", 00:43:24.441 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:24.441 "is_configured": true, 00:43:24.441 "data_offset": 2048, 00:43:24.441 "data_size": 63488 00:43:24.441 }, 00:43:24.441 { 00:43:24.441 "name": "BaseBdev2", 00:43:24.441 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:24.441 "is_configured": true, 00:43:24.441 "data_offset": 2048, 00:43:24.441 "data_size": 63488 00:43:24.441 }, 00:43:24.441 { 00:43:24.441 "name": "BaseBdev3", 00:43:24.441 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:24.441 "is_configured": true, 00:43:24.441 "data_offset": 2048, 00:43:24.441 "data_size": 63488 00:43:24.441 }, 00:43:24.441 { 00:43:24.441 "name": "BaseBdev4", 00:43:24.441 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:24.441 "is_configured": true, 00:43:24.441 "data_offset": 2048, 00:43:24.441 "data_size": 63488 00:43:24.441 } 00:43:24.441 ] 00:43:24.441 }' 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:43:24.441 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@657 -- # local timeout=845 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:24.441 19:36:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:24.700 19:36:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:24.700 "name": "raid_bdev1", 00:43:24.700 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:24.700 "strip_size_kb": 64, 00:43:24.700 "state": "online", 00:43:24.700 "raid_level": "raid5f", 00:43:24.700 "superblock": true, 00:43:24.700 "num_base_bdevs": 4, 00:43:24.700 "num_base_bdevs_discovered": 4, 00:43:24.700 "num_base_bdevs_operational": 4, 00:43:24.700 "process": { 00:43:24.700 "type": "rebuild", 00:43:24.700 "target": "spare", 00:43:24.700 "progress": { 00:43:24.700 "blocks": 26880, 00:43:24.700 "percent": 14 00:43:24.700 } 00:43:24.700 }, 00:43:24.700 "base_bdevs_list": [ 00:43:24.700 { 00:43:24.700 "name": "spare", 00:43:24.700 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:24.700 "is_configured": true, 00:43:24.700 "data_offset": 2048, 00:43:24.700 "data_size": 63488 00:43:24.700 }, 00:43:24.700 { 00:43:24.700 "name": "BaseBdev2", 00:43:24.700 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:24.700 "is_configured": true, 00:43:24.700 "data_offset": 2048, 00:43:24.700 "data_size": 63488 00:43:24.700 }, 00:43:24.700 { 00:43:24.700 "name": "BaseBdev3", 00:43:24.700 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:24.700 "is_configured": true, 00:43:24.700 "data_offset": 2048, 00:43:24.700 "data_size": 63488 00:43:24.700 }, 00:43:24.700 { 00:43:24.700 "name": "BaseBdev4", 00:43:24.700 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:24.700 "is_configured": true, 00:43:24.700 "data_offset": 2048, 00:43:24.700 "data_size": 63488 00:43:24.700 } 00:43:24.700 ] 00:43:24.700 }' 00:43:24.700 19:36:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:24.700 19:36:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:24.700 19:36:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:24.700 19:36:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:24.700 19:36:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:26.073 19:36:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:26.073 19:36:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:26.073 19:36:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:26.073 19:36:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:26.073 19:36:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:26.074 "name": "raid_bdev1", 00:43:26.074 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:26.074 "strip_size_kb": 64, 00:43:26.074 "state": "online", 00:43:26.074 "raid_level": "raid5f", 00:43:26.074 "superblock": true, 00:43:26.074 "num_base_bdevs": 4, 00:43:26.074 
"num_base_bdevs_discovered": 4, 00:43:26.074 "num_base_bdevs_operational": 4, 00:43:26.074 "process": { 00:43:26.074 "type": "rebuild", 00:43:26.074 "target": "spare", 00:43:26.074 "progress": { 00:43:26.074 "blocks": 55680, 00:43:26.074 "percent": 29 00:43:26.074 } 00:43:26.074 }, 00:43:26.074 "base_bdevs_list": [ 00:43:26.074 { 00:43:26.074 "name": "spare", 00:43:26.074 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:26.074 "is_configured": true, 00:43:26.074 "data_offset": 2048, 00:43:26.074 "data_size": 63488 00:43:26.074 }, 00:43:26.074 { 00:43:26.074 "name": "BaseBdev2", 00:43:26.074 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:26.074 "is_configured": true, 00:43:26.074 "data_offset": 2048, 00:43:26.074 "data_size": 63488 00:43:26.074 }, 00:43:26.074 { 00:43:26.074 "name": "BaseBdev3", 00:43:26.074 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:26.074 "is_configured": true, 00:43:26.074 "data_offset": 2048, 00:43:26.074 "data_size": 63488 00:43:26.074 }, 00:43:26.074 { 00:43:26.074 "name": "BaseBdev4", 00:43:26.074 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:26.074 "is_configured": true, 00:43:26.074 "data_offset": 2048, 00:43:26.074 "data_size": 63488 00:43:26.074 } 00:43:26.074 ] 00:43:26.074 }' 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:26.074 19:36:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:26.332 19:36:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:26.332 19:36:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:27.331 19:36:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:27.589 19:36:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:27.589 "name": "raid_bdev1", 00:43:27.589 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:27.589 "strip_size_kb": 64, 00:43:27.589 "state": "online", 00:43:27.589 "raid_level": "raid5f", 00:43:27.589 "superblock": true, 00:43:27.589 "num_base_bdevs": 4, 00:43:27.589 "num_base_bdevs_discovered": 4, 00:43:27.589 "num_base_bdevs_operational": 4, 00:43:27.589 "process": { 00:43:27.589 "type": "rebuild", 00:43:27.589 "target": "spare", 00:43:27.589 "progress": { 00:43:27.589 "blocks": 82560, 00:43:27.589 "percent": 43 00:43:27.589 } 00:43:27.589 }, 00:43:27.589 "base_bdevs_list": [ 00:43:27.589 { 00:43:27.589 "name": "spare", 00:43:27.589 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:27.589 "is_configured": true, 00:43:27.589 "data_offset": 2048, 00:43:27.589 "data_size": 63488 00:43:27.589 }, 00:43:27.589 { 00:43:27.589 "name": "BaseBdev2", 00:43:27.589 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:27.589 "is_configured": true, 00:43:27.589 "data_offset": 2048, 00:43:27.589 "data_size": 63488 00:43:27.589 }, 00:43:27.589 { 00:43:27.589 "name": "BaseBdev3", 00:43:27.589 
"uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:27.589 "is_configured": true, 00:43:27.589 "data_offset": 2048, 00:43:27.589 "data_size": 63488 00:43:27.589 }, 00:43:27.589 { 00:43:27.589 "name": "BaseBdev4", 00:43:27.589 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:27.589 "is_configured": true, 00:43:27.589 "data_offset": 2048, 00:43:27.589 "data_size": 63488 00:43:27.589 } 00:43:27.589 ] 00:43:27.589 }' 00:43:27.590 19:36:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:27.590 19:36:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:27.590 19:36:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:27.590 19:36:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:27.590 19:36:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:28.527 19:36:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:29.093 19:36:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:29.093 "name": "raid_bdev1", 00:43:29.093 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:29.094 "strip_size_kb": 64, 00:43:29.094 "state": "online", 00:43:29.094 "raid_level": "raid5f", 00:43:29.094 "superblock": true, 00:43:29.094 "num_base_bdevs": 4, 00:43:29.094 "num_base_bdevs_discovered": 4, 00:43:29.094 "num_base_bdevs_operational": 4, 00:43:29.094 "process": { 00:43:29.094 "type": "rebuild", 00:43:29.094 "target": "spare", 00:43:29.094 "progress": { 00:43:29.094 "blocks": 109440, 00:43:29.094 "percent": 57 00:43:29.094 } 00:43:29.094 }, 00:43:29.094 "base_bdevs_list": [ 00:43:29.094 { 00:43:29.094 "name": "spare", 00:43:29.094 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:29.094 "is_configured": true, 00:43:29.094 "data_offset": 2048, 00:43:29.094 "data_size": 63488 00:43:29.094 }, 00:43:29.094 { 00:43:29.094 "name": "BaseBdev2", 00:43:29.094 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:29.094 "is_configured": true, 00:43:29.094 "data_offset": 2048, 00:43:29.094 "data_size": 63488 00:43:29.094 }, 00:43:29.094 { 00:43:29.094 "name": "BaseBdev3", 00:43:29.094 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:29.094 "is_configured": true, 00:43:29.094 "data_offset": 2048, 00:43:29.094 "data_size": 63488 00:43:29.094 }, 00:43:29.094 { 00:43:29.094 "name": "BaseBdev4", 00:43:29.094 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:29.094 "is_configured": true, 00:43:29.094 "data_offset": 2048, 00:43:29.094 "data_size": 63488 00:43:29.094 } 00:43:29.094 ] 00:43:29.094 }' 00:43:29.094 19:36:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:29.094 19:36:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:29.094 19:36:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:29.094 19:36:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:29.094 19:36:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:30.056 
19:36:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:30.056 19:36:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:30.314 19:36:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:30.314 "name": "raid_bdev1", 00:43:30.314 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:30.314 "strip_size_kb": 64, 00:43:30.314 "state": "online", 00:43:30.314 "raid_level": "raid5f", 00:43:30.314 "superblock": true, 00:43:30.314 "num_base_bdevs": 4, 00:43:30.314 "num_base_bdevs_discovered": 4, 00:43:30.314 "num_base_bdevs_operational": 4, 00:43:30.314 "process": { 00:43:30.314 "type": "rebuild", 00:43:30.314 "target": "spare", 00:43:30.314 "progress": { 00:43:30.314 "blocks": 134400, 00:43:30.314 "percent": 70 00:43:30.314 } 00:43:30.314 }, 00:43:30.314 "base_bdevs_list": [ 00:43:30.314 { 00:43:30.314 "name": "spare", 00:43:30.314 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:30.314 "is_configured": true, 00:43:30.314 "data_offset": 2048, 00:43:30.314 "data_size": 63488 00:43:30.314 }, 00:43:30.314 { 00:43:30.314 "name": "BaseBdev2", 00:43:30.314 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:30.314 "is_configured": true, 00:43:30.314 "data_offset": 2048, 00:43:30.314 "data_size": 63488 00:43:30.314 }, 00:43:30.314 { 00:43:30.314 "name": "BaseBdev3", 00:43:30.314 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:30.314 "is_configured": true, 00:43:30.314 "data_offset": 2048, 00:43:30.314 "data_size": 63488 00:43:30.314 }, 00:43:30.314 { 00:43:30.314 "name": "BaseBdev4", 00:43:30.314 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:30.314 "is_configured": true, 00:43:30.314 "data_offset": 2048, 00:43:30.314 "data_size": 63488 00:43:30.314 } 00:43:30.314 ] 00:43:30.314 }' 00:43:30.314 19:36:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:30.314 19:36:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:30.314 19:36:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:30.314 19:36:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:30.314 19:36:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:31.686 "name": "raid_bdev1", 
00:43:31.686 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:31.686 "strip_size_kb": 64, 00:43:31.686 "state": "online", 00:43:31.686 "raid_level": "raid5f", 00:43:31.686 "superblock": true, 00:43:31.686 "num_base_bdevs": 4, 00:43:31.686 "num_base_bdevs_discovered": 4, 00:43:31.686 "num_base_bdevs_operational": 4, 00:43:31.686 "process": { 00:43:31.686 "type": "rebuild", 00:43:31.686 "target": "spare", 00:43:31.686 "progress": { 00:43:31.686 "blocks": 161280, 00:43:31.686 "percent": 84 00:43:31.686 } 00:43:31.686 }, 00:43:31.686 "base_bdevs_list": [ 00:43:31.686 { 00:43:31.686 "name": "spare", 00:43:31.686 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:31.686 "is_configured": true, 00:43:31.686 "data_offset": 2048, 00:43:31.686 "data_size": 63488 00:43:31.686 }, 00:43:31.686 { 00:43:31.686 "name": "BaseBdev2", 00:43:31.686 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:31.686 "is_configured": true, 00:43:31.686 "data_offset": 2048, 00:43:31.686 "data_size": 63488 00:43:31.686 }, 00:43:31.686 { 00:43:31.686 "name": "BaseBdev3", 00:43:31.686 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:31.686 "is_configured": true, 00:43:31.686 "data_offset": 2048, 00:43:31.686 "data_size": 63488 00:43:31.686 }, 00:43:31.686 { 00:43:31.686 "name": "BaseBdev4", 00:43:31.686 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:31.686 "is_configured": true, 00:43:31.686 "data_offset": 2048, 00:43:31.686 "data_size": 63488 00:43:31.686 } 00:43:31.686 ] 00:43:31.686 }' 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:31.686 19:36:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:31.687 19:36:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:31.687 19:36:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:31.687 19:36:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:33.061 "name": "raid_bdev1", 00:43:33.061 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:33.061 "strip_size_kb": 64, 00:43:33.061 "state": "online", 00:43:33.061 "raid_level": "raid5f", 00:43:33.061 "superblock": true, 00:43:33.061 "num_base_bdevs": 4, 00:43:33.061 "num_base_bdevs_discovered": 4, 00:43:33.061 "num_base_bdevs_operational": 4, 00:43:33.061 "process": { 00:43:33.061 "type": "rebuild", 00:43:33.061 "target": "spare", 00:43:33.061 "progress": { 00:43:33.061 "blocks": 188160, 00:43:33.061 "percent": 98 00:43:33.061 } 00:43:33.061 }, 00:43:33.061 "base_bdevs_list": [ 00:43:33.061 { 00:43:33.061 "name": "spare", 00:43:33.061 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:33.061 "is_configured": true, 00:43:33.061 "data_offset": 2048, 00:43:33.061 "data_size": 63488 00:43:33.061 }, 00:43:33.061 { 00:43:33.061 "name": 
"BaseBdev2", 00:43:33.061 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:33.061 "is_configured": true, 00:43:33.061 "data_offset": 2048, 00:43:33.061 "data_size": 63488 00:43:33.061 }, 00:43:33.061 { 00:43:33.061 "name": "BaseBdev3", 00:43:33.061 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:33.061 "is_configured": true, 00:43:33.061 "data_offset": 2048, 00:43:33.061 "data_size": 63488 00:43:33.061 }, 00:43:33.061 { 00:43:33.061 "name": "BaseBdev4", 00:43:33.061 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:33.061 "is_configured": true, 00:43:33.061 "data_offset": 2048, 00:43:33.061 "data_size": 63488 00:43:33.061 } 00:43:33.061 ] 00:43:33.061 }' 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:33.061 19:36:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:33.319 19:36:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:33.319 19:36:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:33.319 [2024-04-18 19:36:49.040370] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:43:33.319 [2024-04-18 19:36:49.040451] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:33.319 [2024-04-18 19:36:49.040649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:34.253 19:36:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:34.511 19:36:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:34.511 "name": "raid_bdev1", 00:43:34.511 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:34.511 "strip_size_kb": 64, 00:43:34.511 "state": "online", 00:43:34.511 "raid_level": "raid5f", 00:43:34.511 "superblock": true, 00:43:34.511 "num_base_bdevs": 4, 00:43:34.511 "num_base_bdevs_discovered": 4, 00:43:34.511 "num_base_bdevs_operational": 4, 00:43:34.511 "base_bdevs_list": [ 00:43:34.511 { 00:43:34.511 "name": "spare", 00:43:34.511 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:34.511 "is_configured": true, 00:43:34.511 "data_offset": 2048, 00:43:34.511 "data_size": 63488 00:43:34.511 }, 00:43:34.511 { 00:43:34.511 "name": "BaseBdev2", 00:43:34.511 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:34.511 "is_configured": true, 00:43:34.511 "data_offset": 2048, 00:43:34.511 "data_size": 63488 00:43:34.511 }, 00:43:34.511 { 00:43:34.511 "name": "BaseBdev3", 00:43:34.511 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:34.511 "is_configured": true, 00:43:34.511 "data_offset": 2048, 00:43:34.511 "data_size": 63488 00:43:34.511 }, 00:43:34.511 { 00:43:34.511 "name": "BaseBdev4", 00:43:34.511 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:34.511 "is_configured": true, 00:43:34.511 "data_offset": 2048, 00:43:34.511 "data_size": 63488 00:43:34.511 } 
00:43:34.511 ] 00:43:34.511 }' 00:43:34.511 19:36:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:34.511 19:36:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:34.511 19:36:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@660 -- # break 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:34.768 19:36:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:35.026 "name": "raid_bdev1", 00:43:35.026 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:35.026 "strip_size_kb": 64, 00:43:35.026 "state": "online", 00:43:35.026 "raid_level": "raid5f", 00:43:35.026 "superblock": true, 00:43:35.026 "num_base_bdevs": 4, 00:43:35.026 "num_base_bdevs_discovered": 4, 00:43:35.026 "num_base_bdevs_operational": 4, 00:43:35.026 "base_bdevs_list": [ 00:43:35.026 { 00:43:35.026 "name": "spare", 00:43:35.026 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:35.026 "is_configured": true, 00:43:35.026 "data_offset": 2048, 00:43:35.026 "data_size": 63488 00:43:35.026 }, 00:43:35.026 { 00:43:35.026 "name": "BaseBdev2", 00:43:35.026 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:35.026 "is_configured": true, 00:43:35.026 "data_offset": 2048, 00:43:35.026 "data_size": 63488 00:43:35.026 }, 00:43:35.026 { 00:43:35.026 "name": "BaseBdev3", 00:43:35.026 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:35.026 "is_configured": true, 00:43:35.026 "data_offset": 2048, 00:43:35.026 "data_size": 63488 00:43:35.026 }, 00:43:35.026 { 00:43:35.026 "name": "BaseBdev4", 00:43:35.026 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:35.026 "is_configured": true, 00:43:35.026 "data_offset": 2048, 00:43:35.026 "data_size": 63488 00:43:35.026 } 00:43:35.026 ] 00:43:35.026 }' 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:35.026 19:36:50 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:35.026 19:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:35.284 19:36:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:35.284 "name": "raid_bdev1", 00:43:35.284 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:35.284 "strip_size_kb": 64, 00:43:35.284 "state": "online", 00:43:35.284 "raid_level": "raid5f", 00:43:35.284 "superblock": true, 00:43:35.284 "num_base_bdevs": 4, 00:43:35.284 "num_base_bdevs_discovered": 4, 00:43:35.284 "num_base_bdevs_operational": 4, 00:43:35.284 "base_bdevs_list": [ 00:43:35.284 { 00:43:35.284 "name": "spare", 00:43:35.284 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:35.284 "is_configured": true, 00:43:35.284 "data_offset": 2048, 00:43:35.284 "data_size": 63488 00:43:35.284 }, 00:43:35.284 { 00:43:35.284 "name": "BaseBdev2", 00:43:35.284 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:35.284 "is_configured": true, 00:43:35.284 "data_offset": 2048, 00:43:35.284 "data_size": 63488 00:43:35.284 }, 00:43:35.284 { 00:43:35.284 "name": "BaseBdev3", 00:43:35.284 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:35.284 "is_configured": true, 00:43:35.284 "data_offset": 2048, 00:43:35.284 "data_size": 63488 00:43:35.284 }, 00:43:35.284 { 00:43:35.284 "name": "BaseBdev4", 00:43:35.284 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:35.284 "is_configured": true, 00:43:35.284 "data_offset": 2048, 00:43:35.284 "data_size": 63488 00:43:35.284 } 00:43:35.284 ] 00:43:35.284 }' 00:43:35.284 19:36:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:35.284 19:36:51 -- common/autotest_common.sh@10 -- # set +x 00:43:35.866 19:36:51 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:43:36.124 [2024-04-18 19:36:51.973005] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:36.124 [2024-04-18 19:36:51.973050] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:36.124 [2024-04-18 19:36:51.973143] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:36.124 [2024-04-18 19:36:51.973254] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:36.124 [2024-04-18 19:36:51.973265] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:43:36.124 19:36:51 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:36.124 19:36:51 -- bdev/bdev_raid.sh@671 -- # jq length 00:43:36.383 19:36:52 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:43:36.383 19:36:52 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:43:36.383 19:36:52 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@12 -- # local i 00:43:36.383 
19:36:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:36.383 19:36:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:43:36.641 /dev/nbd0 00:43:36.899 19:36:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:36.899 19:36:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:36.899 19:36:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:43:36.899 19:36:52 -- common/autotest_common.sh@855 -- # local i 00:43:36.899 19:36:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:43:36.899 19:36:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:43:36.899 19:36:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:43:36.899 19:36:52 -- common/autotest_common.sh@859 -- # break 00:43:36.899 19:36:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:43:36.899 19:36:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:43:36.899 19:36:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:36.899 1+0 records in 00:43:36.899 1+0 records out 00:43:36.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678766 s, 6.0 MB/s 00:43:36.899 19:36:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:36.899 19:36:52 -- common/autotest_common.sh@872 -- # size=4096 00:43:36.899 19:36:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:36.899 19:36:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:43:36.899 19:36:52 -- common/autotest_common.sh@875 -- # return 0 00:43:36.899 19:36:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:36.899 19:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:36.899 19:36:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:43:36.899 /dev/nbd1 00:43:37.157 19:36:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:37.157 19:36:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:37.157 19:36:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:43:37.157 19:36:52 -- common/autotest_common.sh@855 -- # local i 00:43:37.157 19:36:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:43:37.157 19:36:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:43:37.157 19:36:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:43:37.157 19:36:52 -- common/autotest_common.sh@859 -- # break 00:43:37.157 19:36:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:43:37.157 19:36:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:43:37.157 19:36:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:37.157 1+0 records in 00:43:37.157 1+0 records out 00:43:37.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448799 s, 9.1 MB/s 00:43:37.157 19:36:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:37.157 19:36:52 -- common/autotest_common.sh@872 -- # size=4096 00:43:37.157 19:36:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:37.157 19:36:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:43:37.157 19:36:52 -- common/autotest_common.sh@875 -- # return 0 
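The nbd sequence traced above is how the rebuild test gets at the on-disk contents of a bdev: it exports the bdev through the kernel NBD driver, polls /proc/partitions until the node appears, and proves the device is readable with one direct-I/O read. A minimal sketch of that export-and-probe pattern, assuming the same RPC socket and an already-running target; only commands that appear in the trace are used:

    # attach an existing bdev to a free /dev/nbd node via the raid test's RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        nbd_start_disk BaseBdev1 /dev/nbd0
    # wait until the kernel registers the device, then read one 4 KiB block directly
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct

Both the rebuilt base bdev and the spare are exported this way (nbd0 and nbd1); the cmp -i 1048576 /dev/nbd0 /dev/nbd1 step just below then compares the two devices while skipping their first 1 MiB, which lines up with the 2048-block data_offset reported in the raid_bdev_info above.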
00:43:37.157 19:36:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:37.157 19:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:37.157 19:36:52 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:43:37.157 19:36:53 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:43:37.157 19:36:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:37.157 19:36:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:37.157 19:36:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:37.157 19:36:53 -- bdev/nbd_common.sh@51 -- # local i 00:43:37.157 19:36:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:37.157 19:36:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:37.415 19:36:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@41 -- # break 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@45 -- # return 0 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:37.672 19:36:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@41 -- # break 00:43:37.929 19:36:53 -- bdev/nbd_common.sh@45 -- # return 0 00:43:37.929 19:36:53 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:43:37.929 19:36:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:43:37.929 19:36:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:43:37.929 19:36:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:43:38.187 19:36:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:38.445 [2024-04-18 19:36:54.308020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:38.445 [2024-04-18 19:36:54.308124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:38.445 [2024-04-18 19:36:54.308168] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:43:38.445 [2024-04-18 19:36:54.308190] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:38.445 [2024-04-18 19:36:54.310787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:38.445 [2024-04-18 19:36:54.310869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:38.445 [2024-04-18 19:36:54.310997] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:43:38.445 [2024-04-18 19:36:54.311061] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:38.445 BaseBdev1 00:43:38.445 19:36:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:43:38.445 19:36:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:43:38.446 19:36:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:43:38.733 19:36:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:38.997 [2024-04-18 19:36:54.824110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:38.997 [2024-04-18 19:36:54.824217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:38.997 [2024-04-18 19:36:54.824265] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:43:38.997 [2024-04-18 19:36:54.824287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:38.997 [2024-04-18 19:36:54.824794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:38.997 [2024-04-18 19:36:54.824863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:38.997 [2024-04-18 19:36:54.824973] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:43:38.997 [2024-04-18 19:36:54.824986] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:43:38.997 [2024-04-18 19:36:54.824995] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:38.997 [2024-04-18 19:36:54.825016] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:43:38.997 [2024-04-18 19:36:54.825090] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:38.997 BaseBdev2 00:43:38.997 19:36:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:43:38.997 19:36:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:43:38.997 19:36:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:43:39.256 19:36:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:43:39.515 [2024-04-18 19:36:55.404243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:43:39.515 [2024-04-18 19:36:55.404353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:39.515 [2024-04-18 19:36:55.404395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c680 00:43:39.515 [2024-04-18 19:36:55.404422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:39.515 [2024-04-18 19:36:55.404939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:39.515 [2024-04-18 19:36:55.405011] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:43:39.515 [2024-04-18 19:36:55.405133] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:43:39.515 [2024-04-18 19:36:55.405162] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:39.515 BaseBdev3 00:43:39.515 19:36:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:43:39.515 19:36:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:43:39.515 19:36:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:43:39.773 19:36:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:43:40.339 [2024-04-18 19:36:56.016448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:43:40.339 [2024-04-18 19:36:56.016591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:40.339 [2024-04-18 19:36:56.016649] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:43:40.339 [2024-04-18 19:36:56.016695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:40.339 [2024-04-18 19:36:56.017356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:40.339 [2024-04-18 19:36:56.017469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:43:40.339 [2024-04-18 19:36:56.017632] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:43:40.339 [2024-04-18 19:36:56.017675] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:43:40.339 BaseBdev4 00:43:40.339 19:36:56 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:43:40.598 19:36:56 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:43:40.856 [2024-04-18 19:36:56.548542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:40.856 [2024-04-18 19:36:56.548652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:40.856 [2024-04-18 19:36:56.548690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:43:40.856 [2024-04-18 19:36:56.548721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:40.856 [2024-04-18 19:36:56.549267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:40.856 [2024-04-18 19:36:56.549325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:40.856 [2024-04-18 19:36:56.549469] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:43:40.856 [2024-04-18 19:36:56.549504] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:40.856 spare 00:43:40.856 19:36:56 -- 
bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:40.856 19:36:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:40.856 [2024-04-18 19:36:56.649639] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:43:40.856 [2024-04-18 19:36:56.649691] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:43:40.856 [2024-04-18 19:36:56.649878] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004cc50 00:43:40.856 [2024-04-18 19:36:56.658765] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:43:40.856 [2024-04-18 19:36:56.658800] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:43:40.856 [2024-04-18 19:36:56.659025] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:41.114 19:36:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:41.114 "name": "raid_bdev1", 00:43:41.114 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:41.114 "strip_size_kb": 64, 00:43:41.114 "state": "online", 00:43:41.114 "raid_level": "raid5f", 00:43:41.114 "superblock": true, 00:43:41.114 "num_base_bdevs": 4, 00:43:41.114 "num_base_bdevs_discovered": 4, 00:43:41.114 "num_base_bdevs_operational": 4, 00:43:41.114 "base_bdevs_list": [ 00:43:41.114 { 00:43:41.114 "name": "spare", 00:43:41.114 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:41.114 "is_configured": true, 00:43:41.114 "data_offset": 2048, 00:43:41.114 "data_size": 63488 00:43:41.114 }, 00:43:41.114 { 00:43:41.114 "name": "BaseBdev2", 00:43:41.114 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:41.114 "is_configured": true, 00:43:41.114 "data_offset": 2048, 00:43:41.114 "data_size": 63488 00:43:41.114 }, 00:43:41.114 { 00:43:41.114 "name": "BaseBdev3", 00:43:41.115 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:41.115 "is_configured": true, 00:43:41.115 "data_offset": 2048, 00:43:41.115 "data_size": 63488 00:43:41.115 }, 00:43:41.115 { 00:43:41.115 "name": "BaseBdev4", 00:43:41.115 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:41.115 "is_configured": true, 00:43:41.115 "data_offset": 2048, 00:43:41.115 "data_size": 63488 00:43:41.115 } 00:43:41.115 ] 00:43:41.115 }' 00:43:41.115 19:36:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:41.115 19:36:56 -- common/autotest_common.sh@10 -- # set +x 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@184 -- 
# local process_type=none 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:41.681 19:36:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:41.939 19:36:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:41.939 "name": "raid_bdev1", 00:43:41.939 "uuid": "18b82d9d-12a2-476e-a8a0-dbc37695dd36", 00:43:41.939 "strip_size_kb": 64, 00:43:41.939 "state": "online", 00:43:41.939 "raid_level": "raid5f", 00:43:41.939 "superblock": true, 00:43:41.939 "num_base_bdevs": 4, 00:43:41.939 "num_base_bdevs_discovered": 4, 00:43:41.939 "num_base_bdevs_operational": 4, 00:43:41.939 "base_bdevs_list": [ 00:43:41.939 { 00:43:41.939 "name": "spare", 00:43:41.939 "uuid": "5a31d715-884f-5a25-b087-2f4741bacfe2", 00:43:41.939 "is_configured": true, 00:43:41.939 "data_offset": 2048, 00:43:41.939 "data_size": 63488 00:43:41.939 }, 00:43:41.939 { 00:43:41.939 "name": "BaseBdev2", 00:43:41.939 "uuid": "38c0ec6b-6970-5b69-a554-373d1cf8b368", 00:43:41.939 "is_configured": true, 00:43:41.939 "data_offset": 2048, 00:43:41.939 "data_size": 63488 00:43:41.939 }, 00:43:41.939 { 00:43:41.939 "name": "BaseBdev3", 00:43:41.939 "uuid": "827b3301-370c-55d3-bc7f-69b67bfb16d3", 00:43:41.939 "is_configured": true, 00:43:41.939 "data_offset": 2048, 00:43:41.939 "data_size": 63488 00:43:41.939 }, 00:43:41.939 { 00:43:41.939 "name": "BaseBdev4", 00:43:41.939 "uuid": "d30ec4ee-92a7-571c-9d30-2a635beffd2f", 00:43:41.939 "is_configured": true, 00:43:41.939 "data_offset": 2048, 00:43:41.939 "data_size": 63488 00:43:41.939 } 00:43:41.939 ] 00:43:41.939 }' 00:43:41.939 19:36:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:42.199 19:36:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:43:42.199 19:36:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:42.199 19:36:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:43:42.199 19:36:57 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:42.199 19:36:57 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:43:42.458 19:36:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:43:42.458 19:36:58 -- bdev/bdev_raid.sh@709 -- # killprocess 142921 00:43:42.458 19:36:58 -- common/autotest_common.sh@936 -- # '[' -z 142921 ']' 00:43:42.458 19:36:58 -- common/autotest_common.sh@940 -- # kill -0 142921 00:43:42.458 19:36:58 -- common/autotest_common.sh@941 -- # uname 00:43:42.458 19:36:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:43:42.458 19:36:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142921 00:43:42.458 killing process with pid 142921 00:43:42.458 Received shutdown signal, test time was about 60.000000 seconds 00:43:42.458 00:43:42.458 Latency(us) 00:43:42.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:42.458 =================================================================================================================== 00:43:42.458 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:42.458 19:36:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:43:42.458 19:36:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:43:42.458 19:36:58 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 142921' 00:43:42.458 19:36:58 -- common/autotest_common.sh@955 -- # kill 142921 00:43:42.458 19:36:58 -- common/autotest_common.sh@960 -- # wait 142921 00:43:42.458 [2024-04-18 19:36:58.171632] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:42.458 [2024-04-18 19:36:58.171725] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:42.458 [2024-04-18 19:36:58.171813] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:42.458 [2024-04-18 19:36:58.171832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:43:43.025 [2024-04-18 19:36:58.735426] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:44.398 ************************************ 00:43:44.398 END TEST raid5f_rebuild_test_sb 00:43:44.398 ************************************ 00:43:44.398 19:37:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:43:44.398 00:43:44.398 real 0m33.121s 00:43:44.398 user 0m50.900s 00:43:44.398 sys 0m3.643s 00:43:44.398 19:37:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:44.398 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:43:44.398 19:37:00 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:43:44.398 00:43:44.398 real 13m54.392s 00:43:44.398 user 22m53.581s 00:43:44.398 sys 1m51.096s 00:43:44.398 19:37:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:44.398 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:43:44.398 ************************************ 00:43:44.398 END TEST bdev_raid 00:43:44.398 ************************************ 00:43:44.398 19:37:00 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:43:44.398 19:37:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:43:44.398 19:37:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:44.398 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:43:44.680 ************************************ 00:43:44.680 START TEST bdevperf_config 00:43:44.680 ************************************ 00:43:44.680 19:37:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:43:44.680 * Looking for test storage... 
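The bdevperf_config suite that starts here assembles a job file out of [global] and [jobN] INI sections with the create_job helper and runs the bdevperf example application against a fixed JSON bdev config. The exact keys written depend on which rw/filename arguments are passed to create_job, so the following is only the general shape, not a verbatim copy of test.conf; for the first case (a global read job on Malloc0 plus four empty per-job sections, matching the arguments traced below) it comes out roughly as:

    [global]
    rw=read
    filename=Malloc0

    [job0]
    [job1]
    [job2]
    [job3]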
00:43:44.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:43:44.680 19:37:00 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:43:44.680 19:37:00 -- bdevperf/common.sh@8 -- # local job_section=global 00:43:44.680 19:37:00 -- bdevperf/common.sh@9 -- # local rw=read 00:43:44.680 19:37:00 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:43:44.680 19:37:00 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:43:44.680 19:37:00 -- bdevperf/common.sh@13 -- # cat 00:43:44.680 19:37:00 -- bdevperf/common.sh@18 -- # job='[global]' 00:43:44.680 00:43:44.680 19:37:00 -- bdevperf/common.sh@19 -- # echo 00:43:44.680 19:37:00 -- bdevperf/common.sh@20 -- # cat 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@18 -- # create_job job0 00:43:44.680 19:37:00 -- bdevperf/common.sh@8 -- # local job_section=job0 00:43:44.680 19:37:00 -- bdevperf/common.sh@9 -- # local rw= 00:43:44.680 19:37:00 -- bdevperf/common.sh@10 -- # local filename= 00:43:44.680 19:37:00 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:43:44.680 19:37:00 -- bdevperf/common.sh@18 -- # job='[job0]' 00:43:44.680 00:43:44.680 19:37:00 -- bdevperf/common.sh@19 -- # echo 00:43:44.680 19:37:00 -- bdevperf/common.sh@20 -- # cat 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@19 -- # create_job job1 00:43:44.680 19:37:00 -- bdevperf/common.sh@8 -- # local job_section=job1 00:43:44.680 19:37:00 -- bdevperf/common.sh@9 -- # local rw= 00:43:44.680 19:37:00 -- bdevperf/common.sh@10 -- # local filename= 00:43:44.680 19:37:00 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:43:44.680 00:43:44.680 19:37:00 -- bdevperf/common.sh@18 -- # job='[job1]' 00:43:44.680 19:37:00 -- bdevperf/common.sh@19 -- # echo 00:43:44.680 19:37:00 -- bdevperf/common.sh@20 -- # cat 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@20 -- # create_job job2 00:43:44.680 19:37:00 -- bdevperf/common.sh@8 -- # local job_section=job2 00:43:44.680 19:37:00 -- bdevperf/common.sh@9 -- # local rw= 00:43:44.680 19:37:00 -- bdevperf/common.sh@10 -- # local filename= 00:43:44.680 19:37:00 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:43:44.680 19:37:00 -- bdevperf/common.sh@18 -- # job='[job2]' 00:43:44.680 00:43:44.680 19:37:00 -- bdevperf/common.sh@19 -- # echo 00:43:44.680 19:37:00 -- bdevperf/common.sh@20 -- # cat 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@21 -- # create_job job3 00:43:44.680 19:37:00 -- bdevperf/common.sh@8 -- # local job_section=job3 00:43:44.680 19:37:00 -- bdevperf/common.sh@9 -- # local rw= 00:43:44.680 19:37:00 -- bdevperf/common.sh@10 -- # local filename= 00:43:44.680 00:43:44.680 19:37:00 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:43:44.680 19:37:00 -- bdevperf/common.sh@18 -- # job='[job3]' 00:43:44.680 19:37:00 -- bdevperf/common.sh@19 -- # echo 00:43:44.680 19:37:00 -- bdevperf/common.sh@20 -- # cat 00:43:44.680 19:37:00 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:43:49.953 19:37:05 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-18 19:37:00.531948] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:43:49.953 [2024-04-18 19:37:00.532140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143809 ] 00:43:49.953 Using job config with 4 jobs 00:43:49.953 [2024-04-18 19:37:00.705307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:49.954 [2024-04-18 19:37:01.012493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:49.954 cpumask for '\''job0'\'' is too big 00:43:49.954 cpumask for '\''job1'\'' is too big 00:43:49.954 cpumask for '\''job2'\'' is too big 00:43:49.954 cpumask for '\''job3'\'' is too big 00:43:49.954 Running I/O for 2 seconds... 00:43:49.954 00:43:49.954 Latency(us) 00:43:49.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28925.36 28.25 0.00 0.00 8842.89 1622.80 14043.43 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28902.92 28.23 0.00 0.00 8832.38 1568.18 12545.46 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28881.88 28.20 0.00 0.00 8821.73 1614.99 10985.08 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28860.90 28.18 0.00 0.00 8811.87 1583.79 9799.19 00:43:49.954 =================================================================================================================== 00:43:49.954 Total : 115571.07 112.86 0.00 0.00 8827.22 1568.18 14043.43' 00:43:49.954 19:37:05 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-18 19:37:00.531948] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:43:49.954 [2024-04-18 19:37:00.532140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143809 ] 00:43:49.954 Using job config with 4 jobs 00:43:49.954 [2024-04-18 19:37:00.705307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:49.954 [2024-04-18 19:37:01.012493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:49.954 cpumask for '\''job0'\'' is too big 00:43:49.954 cpumask for '\''job1'\'' is too big 00:43:49.954 cpumask for '\''job2'\'' is too big 00:43:49.954 cpumask for '\''job3'\'' is too big 00:43:49.954 Running I/O for 2 seconds... 
00:43:49.954 00:43:49.954 Latency(us) 00:43:49.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28925.36 28.25 0.00 0.00 8842.89 1622.80 14043.43 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28902.92 28.23 0.00 0.00 8832.38 1568.18 12545.46 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28881.88 28.20 0.00 0.00 8821.73 1614.99 10985.08 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28860.90 28.18 0.00 0.00 8811.87 1583.79 9799.19 00:43:49.954 =================================================================================================================== 00:43:49.954 Total : 115571.07 112.86 0.00 0.00 8827.22 1568.18 14043.43' 00:43:49.954 19:37:05 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:43:49.954 19:37:05 -- bdevperf/common.sh@32 -- # echo '[2024-04-18 19:37:00.531948] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:43:49.954 [2024-04-18 19:37:00.532140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143809 ] 00:43:49.954 Using job config with 4 jobs 00:43:49.954 [2024-04-18 19:37:00.705307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:49.954 [2024-04-18 19:37:01.012493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:49.954 cpumask for '\''job0'\'' is too big 00:43:49.954 cpumask for '\''job1'\'' is too big 00:43:49.954 cpumask for '\''job2'\'' is too big 00:43:49.954 cpumask for '\''job3'\'' is too big 00:43:49.954 Running I/O for 2 seconds... 00:43:49.954 00:43:49.954 Latency(us) 00:43:49.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28925.36 28.25 0.00 0.00 8842.89 1622.80 14043.43 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28902.92 28.23 0.00 0.00 8832.38 1568.18 12545.46 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28881.88 28.20 0.00 0.00 8821.73 1614.99 10985.08 00:43:49.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:49.954 Malloc0 : 2.02 28860.90 28.18 0.00 0.00 8811.87 1583.79 9799.19 00:43:49.954 =================================================================================================================== 00:43:49.954 Total : 115571.07 112.86 0.00 0.00 8827.22 1568.18 14043.43' 00:43:49.954 19:37:05 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:43:49.954 19:37:05 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:43:49.954 19:37:05 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:43:49.954 [2024-04-18 19:37:05.556029] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:43:49.954 [2024-04-18 19:37:05.556214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143876 ] 00:43:49.954 [2024-04-18 19:37:05.730388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:50.212 [2024-04-18 19:37:05.970801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.778 cpumask for 'job0' is too big 00:43:50.778 cpumask for 'job1' is too big 00:43:50.778 cpumask for 'job2' is too big 00:43:50.778 cpumask for 'job3' is too big 00:43:54.966 19:37:10 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:43:54.966 Running I/O for 2 seconds... 00:43:54.966 00:43:54.966 Latency(us) 00:43:54.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.966 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:54.966 Malloc0 : 2.01 28372.91 27.71 0.00 0.00 9013.87 1958.28 17226.61 00:43:54.966 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:54.966 Malloc0 : 2.02 28383.26 27.72 0.00 0.00 8989.97 1942.67 16103.13 00:43:54.966 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:54.966 Malloc0 : 2.02 28361.82 27.70 0.00 0.00 8974.93 1950.48 14293.09 00:43:54.966 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:43:54.966 Malloc0 : 2.02 28340.99 27.68 0.00 0.00 8960.73 1966.08 12420.63 00:43:54.966 =================================================================================================================== 00:43:54.966 Total : 113458.98 110.80 0.00 0.00 8984.84 1942.67 17226.61' 00:43:54.966 19:37:10 -- bdevperf/test_config.sh@27 -- # cleanup 00:43:54.966 19:37:10 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:43:54.966 19:37:10 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:43:54.966 19:37:10 -- bdevperf/common.sh@8 -- # local job_section=job0 00:43:54.966 19:37:10 -- bdevperf/common.sh@9 -- # local rw=write 00:43:54.966 19:37:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:43:54.966 19:37:10 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:43:54.966 00:43:54.966 19:37:10 -- bdevperf/common.sh@18 -- # job='[job0]' 00:43:54.966 19:37:10 -- bdevperf/common.sh@19 -- # echo 00:43:54.966 19:37:10 -- bdevperf/common.sh@20 -- # cat 00:43:54.966 19:37:10 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:43:54.966 19:37:10 -- bdevperf/common.sh@8 -- # local job_section=job1 00:43:54.966 19:37:10 -- bdevperf/common.sh@9 -- # local rw=write 00:43:54.966 19:37:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:43:54.966 19:37:10 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:43:54.966 19:37:10 -- bdevperf/common.sh@18 -- # job='[job1]' 00:43:54.966 00:43:54.966 19:37:10 -- bdevperf/common.sh@19 -- # echo 00:43:54.966 19:37:10 -- bdevperf/common.sh@20 -- # cat 00:43:54.966 19:37:10 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:43:54.966 19:37:10 -- bdevperf/common.sh@8 -- # local job_section=job2 00:43:54.966 19:37:10 -- bdevperf/common.sh@9 -- # local rw=write 00:43:54.966 19:37:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:43:54.966 19:37:10 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:43:54.966 00:43:54.966 19:37:10 -- 
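Each run's output is then parsed rather than trusted blindly: the 'Using job config with N jobs' banner is grepped back out of the captured text and compared against the number of sections the case configured. Reconstructed from the common.sh/test_config.sh trace (variable and helper names follow the trace; the literal assertion in the log appears in its expanded [[ 4 == \4 ]] form):

    num_jobs=$(echo "$bdevperf_output" \
        | grep -oE 'Using job config with [0-9]+ jobs' \
        | grep -oE '[0-9]+')
    [[ $num_jobs == 4 ]]   # this case configured four [jobN] sections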
bdevperf/common.sh@18 -- # job='[job2]' 00:43:54.966 19:37:10 -- bdevperf/common.sh@19 -- # echo 00:43:54.966 19:37:10 -- bdevperf/common.sh@20 -- # cat 00:43:54.966 19:37:10 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-18 19:37:10.480919] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:00.233 [2024-04-18 19:37:10.481163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143959 ] 00:44:00.233 Using job config with 3 jobs 00:44:00.233 [2024-04-18 19:37:10.661009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:00.233 [2024-04-18 19:37:10.894919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.233 cpumask for '\''job0'\'' is too big 00:44:00.233 cpumask for '\''job1'\'' is too big 00:44:00.233 cpumask for '\''job2'\'' is too big 00:44:00.233 Running I/O for 2 seconds... 00:44:00.233 00:44:00.233 Latency(us) 00:44:00.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.01 37472.05 36.59 0.00 0.00 6824.27 1833.45 12795.12 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.01 37482.28 36.60 0.00 0.00 6807.39 1755.43 12732.71 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.02 37452.35 36.57 0.00 0.00 6798.72 1786.64 12545.46 00:44:00.233 =================================================================================================================== 00:44:00.233 Total : 112406.67 109.77 0.00 0.00 6810.11 1755.43 12795.12' 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-18 19:37:10.480919] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:00.233 [2024-04-18 19:37:10.481163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143959 ] 00:44:00.233 Using job config with 3 jobs 00:44:00.233 [2024-04-18 19:37:10.661009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:00.233 [2024-04-18 19:37:10.894919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.233 cpumask for '\''job0'\'' is too big 00:44:00.233 cpumask for '\''job1'\'' is too big 00:44:00.233 cpumask for '\''job2'\'' is too big 00:44:00.233 Running I/O for 2 seconds... 
00:44:00.233 00:44:00.233 Latency(us) 00:44:00.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.01 37472.05 36.59 0.00 0.00 6824.27 1833.45 12795.12 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.01 37482.28 36.60 0.00 0.00 6807.39 1755.43 12732.71 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.02 37452.35 36.57 0.00 0.00 6798.72 1786.64 12545.46 00:44:00.233 =================================================================================================================== 00:44:00.233 Total : 112406.67 109.77 0.00 0.00 6810.11 1755.43 12795.12' 00:44:00.233 19:37:15 -- bdevperf/common.sh@32 -- # echo '[2024-04-18 19:37:10.480919] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:00.233 [2024-04-18 19:37:10.481163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143959 ] 00:44:00.233 Using job config with 3 jobs 00:44:00.233 [2024-04-18 19:37:10.661009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:00.233 [2024-04-18 19:37:10.894919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.233 cpumask for '\''job0'\'' is too big 00:44:00.233 cpumask for '\''job1'\'' is too big 00:44:00.233 cpumask for '\''job2'\'' is too big 00:44:00.233 Running I/O for 2 seconds... 00:44:00.233 00:44:00.233 Latency(us) 00:44:00.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.01 37472.05 36.59 0.00 0.00 6824.27 1833.45 12795.12 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.01 37482.28 36.60 0.00 0.00 6807.39 1755.43 12732.71 00:44:00.233 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:44:00.233 Malloc0 : 2.02 37452.35 36.57 0.00 0.00 6798.72 1786.64 12545.46 00:44:00.233 =================================================================================================================== 00:44:00.233 Total : 112406.67 109.77 0.00 0.00 6810.11 1755.43 12795.12' 00:44:00.233 19:37:15 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:44:00.233 19:37:15 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@35 -- # cleanup 00:44:00.233 19:37:15 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:44:00.233 19:37:15 -- bdevperf/common.sh@8 -- # local job_section=global 00:44:00.233 19:37:15 -- bdevperf/common.sh@9 -- # local rw=rw 00:44:00.233 19:37:15 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:44:00.233 19:37:15 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:44:00.233 19:37:15 -- bdevperf/common.sh@13 -- # cat 00:44:00.233 19:37:15 -- bdevperf/common.sh@18 -- # job='[global]' 00:44:00.233 00:44:00.233 19:37:15 -- bdevperf/common.sh@19 -- # echo 
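Every case launches the same binary with both generated files, capped at a two-second run; the command is the one traced at test_config.sh@22/@32 above and @42 just below:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
        -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf

Here --json supplies the bdev definitions the jobs target, -j points at the job file assembled by create_job, and -t bounds the runtime in seconds, hence the "Running I/O for 2 seconds..." lines in each capture.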
00:44:00.233 19:37:15 -- bdevperf/common.sh@20 -- # cat 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@38 -- # create_job job0 00:44:00.233 19:37:15 -- bdevperf/common.sh@8 -- # local job_section=job0 00:44:00.233 19:37:15 -- bdevperf/common.sh@9 -- # local rw= 00:44:00.233 19:37:15 -- bdevperf/common.sh@10 -- # local filename= 00:44:00.233 19:37:15 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:44:00.233 19:37:15 -- bdevperf/common.sh@18 -- # job='[job0]' 00:44:00.233 00:44:00.233 19:37:15 -- bdevperf/common.sh@19 -- # echo 00:44:00.233 19:37:15 -- bdevperf/common.sh@20 -- # cat 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@39 -- # create_job job1 00:44:00.233 19:37:15 -- bdevperf/common.sh@8 -- # local job_section=job1 00:44:00.233 19:37:15 -- bdevperf/common.sh@9 -- # local rw= 00:44:00.233 19:37:15 -- bdevperf/common.sh@10 -- # local filename= 00:44:00.233 00:44:00.233 19:37:15 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:44:00.233 19:37:15 -- bdevperf/common.sh@18 -- # job='[job1]' 00:44:00.233 19:37:15 -- bdevperf/common.sh@19 -- # echo 00:44:00.233 19:37:15 -- bdevperf/common.sh@20 -- # cat 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@40 -- # create_job job2 00:44:00.233 19:37:15 -- bdevperf/common.sh@8 -- # local job_section=job2 00:44:00.233 19:37:15 -- bdevperf/common.sh@9 -- # local rw= 00:44:00.233 19:37:15 -- bdevperf/common.sh@10 -- # local filename= 00:44:00.233 19:37:15 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:44:00.233 00:44:00.233 19:37:15 -- bdevperf/common.sh@18 -- # job='[job2]' 00:44:00.233 19:37:15 -- bdevperf/common.sh@19 -- # echo 00:44:00.233 19:37:15 -- bdevperf/common.sh@20 -- # cat 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@41 -- # create_job job3 00:44:00.233 19:37:15 -- bdevperf/common.sh@8 -- # local job_section=job3 00:44:00.233 19:37:15 -- bdevperf/common.sh@9 -- # local rw= 00:44:00.233 19:37:15 -- bdevperf/common.sh@10 -- # local filename= 00:44:00.233 19:37:15 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:44:00.233 19:37:15 -- bdevperf/common.sh@18 -- # job='[job3]' 00:44:00.233 19:37:15 -- bdevperf/common.sh@19 -- # echo 00:44:00.233 00:44:00.233 19:37:15 -- bdevperf/common.sh@20 -- # cat 00:44:00.233 19:37:15 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:44:04.415 19:37:20 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-18 19:37:15.394012] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:04.415 [2024-04-18 19:37:15.394219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144018 ] 00:44:04.415 Using job config with 4 jobs 00:44:04.415 [2024-04-18 19:37:15.574141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.415 [2024-04-18 19:37:15.848670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:04.415 cpumask for '\''job0'\'' is too big 00:44:04.415 cpumask for '\''job1'\'' is too big 00:44:04.415 cpumask for '\''job2'\'' is too big 00:44:04.415 cpumask for '\''job3'\'' is too big 00:44:04.415 Running I/O for 2 seconds... 
00:44:04.415 00:44:04.415 Latency(us) 00:44:04.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:04.415 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc0 : 2.02 14195.94 13.86 0.00 0.00 18019.41 3308.01 27462.70 00:44:04.415 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc1 : 2.04 14197.88 13.87 0.00 0.00 18002.70 3838.54 27462.70 00:44:04.415 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc0 : 2.04 14186.97 13.85 0.00 0.00 17962.42 3183.18 24217.11 00:44:04.415 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc1 : 2.04 14175.74 13.84 0.00 0.00 17961.63 3822.93 24217.11 00:44:04.415 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc0 : 2.04 14164.94 13.83 0.00 0.00 17925.46 3167.57 21221.18 00:44:04.415 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc1 : 2.04 14153.77 13.82 0.00 0.00 17923.47 3838.54 21221.18 00:44:04.415 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc0 : 2.05 14143.04 13.81 0.00 0.00 17886.68 3167.57 19473.55 00:44:04.415 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc1 : 2.05 14131.93 13.80 0.00 0.00 17885.91 3947.76 19473.55 00:44:04.415 =================================================================================================================== 00:44:04.415 Total : 113350.22 110.69 0.00 0.00 17945.88 3167.57 27462.70' 00:44:04.415 19:37:20 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-18 19:37:15.394012] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:04.415 [2024-04-18 19:37:15.394219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144018 ] 00:44:04.415 Using job config with 4 jobs 00:44:04.415 [2024-04-18 19:37:15.574141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.415 [2024-04-18 19:37:15.848670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:04.415 cpumask for '\''job0'\'' is too big 00:44:04.415 cpumask for '\''job1'\'' is too big 00:44:04.415 cpumask for '\''job2'\'' is too big 00:44:04.415 cpumask for '\''job3'\'' is too big 00:44:04.415 Running I/O for 2 seconds... 
00:44:04.415 00:44:04.415 Latency(us) 00:44:04.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:04.415 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.415 Malloc0 : 2.02 14195.94 13.86 0.00 0.00 18019.41 3308.01 27462.70 00:44:04.415 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.04 14197.88 13.87 0.00 0.00 18002.70 3838.54 27462.70 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.04 14186.97 13.85 0.00 0.00 17962.42 3183.18 24217.11 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.04 14175.74 13.84 0.00 0.00 17961.63 3822.93 24217.11 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.04 14164.94 13.83 0.00 0.00 17925.46 3167.57 21221.18 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.04 14153.77 13.82 0.00 0.00 17923.47 3838.54 21221.18 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.05 14143.04 13.81 0.00 0.00 17886.68 3167.57 19473.55 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.05 14131.93 13.80 0.00 0.00 17885.91 3947.76 19473.55 00:44:04.416 =================================================================================================================== 00:44:04.416 Total : 113350.22 110.69 0.00 0.00 17945.88 3167.57 27462.70' 00:44:04.416 19:37:20 -- bdevperf/common.sh@32 -- # echo '[2024-04-18 19:37:15.394012] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:04.416 [2024-04-18 19:37:15.394219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144018 ] 00:44:04.416 Using job config with 4 jobs 00:44:04.416 [2024-04-18 19:37:15.574141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.416 [2024-04-18 19:37:15.848670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:04.416 cpumask for '\''job0'\'' is too big 00:44:04.416 cpumask for '\''job1'\'' is too big 00:44:04.416 cpumask for '\''job2'\'' is too big 00:44:04.416 cpumask for '\''job3'\'' is too big 00:44:04.416 Running I/O for 2 seconds... 
00:44:04.416 00:44:04.416 Latency(us) 00:44:04.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.02 14195.94 13.86 0.00 0.00 18019.41 3308.01 27462.70 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.04 14197.88 13.87 0.00 0.00 18002.70 3838.54 27462.70 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.04 14186.97 13.85 0.00 0.00 17962.42 3183.18 24217.11 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.04 14175.74 13.84 0.00 0.00 17961.63 3822.93 24217.11 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.04 14164.94 13.83 0.00 0.00 17925.46 3167.57 21221.18 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.04 14153.77 13.82 0.00 0.00 17923.47 3838.54 21221.18 00:44:04.416 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc0 : 2.05 14143.04 13.81 0.00 0.00 17886.68 3167.57 19473.55 00:44:04.416 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:44:04.416 Malloc1 : 2.05 14131.93 13.80 0.00 0.00 17885.91 3947.76 19473.55 00:44:04.416 =================================================================================================================== 00:44:04.416 Total : 113350.22 110.69 0.00 0.00 17945.88 3167.57 27462.70' 00:44:04.416 19:37:20 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:44:04.416 19:37:20 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:44:04.416 19:37:20 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:44:04.416 19:37:20 -- bdevperf/test_config.sh@44 -- # cleanup 00:44:04.416 19:37:20 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:44:04.416 19:37:20 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:44:04.416 00:44:04.416 real 0m19.997s 00:44:04.416 user 0m18.201s 00:44:04.416 sys 0m1.224s 00:44:04.416 19:37:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:04.675 19:37:20 -- common/autotest_common.sh@10 -- # set +x 00:44:04.675 ************************************ 00:44:04.675 END TEST bdevperf_config 00:44:04.675 ************************************ 00:44:04.675 19:37:20 -- spdk/autotest.sh@188 -- # uname -s 00:44:04.675 19:37:20 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]] 00:44:04.675 19:37:20 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:44:04.675 19:37:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:44:04.675 19:37:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:04.675 19:37:20 -- common/autotest_common.sh@10 -- # set +x 00:44:04.675 ************************************ 00:44:04.675 START TEST reactor_set_interrupt 00:44:04.675 ************************************ 00:44:04.675 19:37:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:44:04.675 * Looking for test storage... 
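The reactor_set_interrupt suite resolves its own directory layout before pulling in the shared helpers; the dirname/readlink chain traced below is equivalent to roughly the following shell, with paths as reported in the trace:

    testdir=$(readlink -f "$(dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh)")
    rootdir=$(readlink -f "$testdir/../..")          # /home/vagrant/spdk_repo/spdk
    source "$rootdir/test/common/autotest_common.sh" # which in turn sources build_config.sh, as shown below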
00:44:04.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.675 19:37:20 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:44:04.675 19:37:20 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:44:04.675 19:37:20 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.675 19:37:20 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.675 19:37:20 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:44:04.675 19:37:20 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:04.675 19:37:20 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:44:04.675 19:37:20 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:44:04.675 19:37:20 -- common/autotest_common.sh@34 -- # set -e 00:44:04.675 19:37:20 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:44:04.675 19:37:20 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:44:04.675 19:37:20 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:44:04.675 19:37:20 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:44:04.675 19:37:20 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:44:04.675 19:37:20 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:44:04.675 19:37:20 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:44:04.675 19:37:20 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:44:04.675 19:37:20 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:44:04.675 19:37:20 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:44:04.675 19:37:20 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:44:04.675 19:37:20 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:44:04.675 19:37:20 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:44:04.675 19:37:20 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:44:04.675 19:37:20 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:44:04.675 19:37:20 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:44:04.675 19:37:20 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:44:04.675 19:37:20 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:44:04.675 19:37:20 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:44:04.675 19:37:20 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:44:04.675 19:37:20 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:44:04.675 19:37:20 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:44:04.675 19:37:20 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:44:04.675 19:37:20 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:44:04.675 19:37:20 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:44:04.675 19:37:20 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:44:04.675 19:37:20 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:44:04.675 19:37:20 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:44:04.675 19:37:20 -- 
common/build_config.sh@27 -- # CONFIG_FUSE=n 00:44:04.675 19:37:20 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:44:04.675 19:37:20 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:44:04.675 19:37:20 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:44:04.675 19:37:20 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:44:04.675 19:37:20 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:44:04.675 19:37:20 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:44:04.675 19:37:20 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:44:04.675 19:37:20 -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:44:04.675 19:37:20 -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:44:04.675 19:37:20 -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:44:04.675 19:37:20 -- common/build_config.sh@39 -- # CONFIG_ASAN=y 00:44:04.675 19:37:20 -- common/build_config.sh@40 -- # CONFIG_SHARED=n 00:44:04.675 19:37:20 -- common/build_config.sh@41 -- # CONFIG_VTUNE_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@42 -- # CONFIG_RDMA_SET_TOS=y 00:44:04.675 19:37:20 -- common/build_config.sh@43 -- # CONFIG_VBDEV_COMPRESS=n 00:44:04.675 19:37:20 -- common/build_config.sh@44 -- # CONFIG_VFIO_USER_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@45 -- # CONFIG_PGO_DIR= 00:44:04.675 19:37:20 -- common/build_config.sh@46 -- # CONFIG_FUZZER_LIB= 00:44:04.675 19:37:20 -- common/build_config.sh@47 -- # CONFIG_HAVE_EXECINFO_H=y 00:44:04.675 19:37:20 -- common/build_config.sh@48 -- # CONFIG_USDT=n 00:44:04.675 19:37:20 -- common/build_config.sh@49 -- # CONFIG_HAVE_KEYUTILS=y 00:44:04.675 19:37:20 -- common/build_config.sh@50 -- # CONFIG_URING_ZNS=n 00:44:04.675 19:37:20 -- common/build_config.sh@51 -- # CONFIG_FC_PATH= 00:44:04.675 19:37:20 -- common/build_config.sh@52 -- # CONFIG_COVERAGE=y 00:44:04.675 19:37:20 -- common/build_config.sh@53 -- # CONFIG_CUSTOMOCF=n 00:44:04.675 19:37:20 -- common/build_config.sh@54 -- # CONFIG_DPDK_PKG_CONFIG=n 00:44:04.675 19:37:20 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:44:04.675 19:37:20 -- common/build_config.sh@56 -- # CONFIG_DEBUG=y 00:44:04.675 19:37:20 -- common/build_config.sh@57 -- # CONFIG_RDMA=y 00:44:04.675 19:37:20 -- common/build_config.sh@58 -- # CONFIG_HAVE_ARC4RANDOM=n 00:44:04.675 19:37:20 -- common/build_config.sh@59 -- # CONFIG_FUZZER=n 00:44:04.675 19:37:20 -- common/build_config.sh@60 -- # CONFIG_FC=n 00:44:04.675 19:37:20 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:44:04.675 19:37:20 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBARCHIVE=n 00:44:04.675 19:37:20 -- common/build_config.sh@63 -- # CONFIG_DPDK_COMPRESSDEV=n 00:44:04.675 19:37:20 -- common/build_config.sh@64 -- # CONFIG_CROSS_PREFIX= 00:44:04.675 19:37:20 -- common/build_config.sh@65 -- # CONFIG_PREFIX=/usr/local 00:44:04.675 19:37:20 -- common/build_config.sh@66 -- # CONFIG_HAVE_LIBBSD=n 00:44:04.675 19:37:20 -- common/build_config.sh@67 -- # CONFIG_UBSAN=y 00:44:04.675 19:37:20 -- common/build_config.sh@68 -- # CONFIG_PGO_CAPTURE=n 00:44:04.675 19:37:20 -- common/build_config.sh@69 -- # CONFIG_UBLK=n 00:44:04.675 19:37:20 -- common/build_config.sh@70 -- # CONFIG_ISAL_CRYPTO=y 00:44:04.675 19:37:20 -- common/build_config.sh@71 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:44:04.675 19:37:20 -- common/build_config.sh@72 -- # CONFIG_CRYPTO=n 00:44:04.676 19:37:20 -- common/build_config.sh@73 -- # 
CONFIG_RBD=n 00:44:04.676 19:37:20 -- common/build_config.sh@74 -- # CONFIG_LIBDIR= 00:44:04.676 19:37:20 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB_DIR= 00:44:04.676 19:37:20 -- common/build_config.sh@76 -- # CONFIG_PGO_USE=n 00:44:04.676 19:37:20 -- common/build_config.sh@77 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:04.676 19:37:20 -- common/build_config.sh@78 -- # CONFIG_GOLANG=n 00:44:04.676 19:37:20 -- common/build_config.sh@79 -- # CONFIG_VHOST=y 00:44:04.676 19:37:20 -- common/build_config.sh@80 -- # CONFIG_IDXD=y 00:44:04.676 19:37:20 -- common/build_config.sh@81 -- # CONFIG_AVAHI=n 00:44:04.676 19:37:20 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:44:04.676 19:37:20 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:44:04.676 19:37:20 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:44:04.676 19:37:20 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:44:04.676 19:37:20 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:44:04.676 19:37:20 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:44:04.676 19:37:20 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:44:04.676 19:37:20 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:44:04.676 19:37:20 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:44:04.676 19:37:20 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:44:04.676 19:37:20 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:44:04.676 19:37:20 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:44:04.676 19:37:20 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:44:04.676 19:37:20 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:44:04.676 19:37:20 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:44:04.676 19:37:20 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:44:04.676 19:37:20 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:44:04.676 #define SPDK_CONFIG_H 00:44:04.676 #define SPDK_CONFIG_APPS 1 00:44:04.676 #define SPDK_CONFIG_ARCH native 00:44:04.676 #define SPDK_CONFIG_ASAN 1 00:44:04.676 #undef SPDK_CONFIG_AVAHI 00:44:04.676 #undef SPDK_CONFIG_CET 00:44:04.676 #define SPDK_CONFIG_COVERAGE 1 00:44:04.676 #define SPDK_CONFIG_CROSS_PREFIX 00:44:04.676 #undef SPDK_CONFIG_CRYPTO 00:44:04.676 #undef SPDK_CONFIG_CRYPTO_MLX5 00:44:04.676 #undef SPDK_CONFIG_CUSTOMOCF 00:44:04.676 #undef SPDK_CONFIG_DAOS 00:44:04.676 #define SPDK_CONFIG_DAOS_DIR 00:44:04.676 #define SPDK_CONFIG_DEBUG 1 00:44:04.676 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:44:04.676 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:44:04.676 #define SPDK_CONFIG_DPDK_INC_DIR 00:44:04.676 #define SPDK_CONFIG_DPDK_LIB_DIR 00:44:04.676 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:44:04.676 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:04.676 #define SPDK_CONFIG_EXAMPLES 1 00:44:04.676 #undef SPDK_CONFIG_FC 00:44:04.676 #define SPDK_CONFIG_FC_PATH 00:44:04.676 #define SPDK_CONFIG_FIO_PLUGIN 1 00:44:04.676 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:44:04.676 #undef SPDK_CONFIG_FUSE 00:44:04.676 #undef SPDK_CONFIG_FUZZER 00:44:04.676 
#define SPDK_CONFIG_FUZZER_LIB 00:44:04.676 #undef SPDK_CONFIG_GOLANG 00:44:04.676 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:44:04.676 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:44:04.676 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:44:04.676 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:44:04.676 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:44:04.676 #undef SPDK_CONFIG_HAVE_LIBBSD 00:44:04.676 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:44:04.676 #define SPDK_CONFIG_IDXD 1 00:44:04.676 #undef SPDK_CONFIG_IDXD_KERNEL 00:44:04.676 #undef SPDK_CONFIG_IPSEC_MB 00:44:04.676 #define SPDK_CONFIG_IPSEC_MB_DIR 00:44:04.676 #define SPDK_CONFIG_ISAL 1 00:44:04.676 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:44:04.676 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:44:04.676 #define SPDK_CONFIG_LIBDIR 00:44:04.676 #undef SPDK_CONFIG_LTO 00:44:04.676 #define SPDK_CONFIG_MAX_LCORES 00:44:04.676 #define SPDK_CONFIG_NVME_CUSE 1 00:44:04.676 #undef SPDK_CONFIG_OCF 00:44:04.676 #define SPDK_CONFIG_OCF_PATH 00:44:04.676 #define SPDK_CONFIG_OPENSSL_PATH 00:44:04.676 #undef SPDK_CONFIG_PGO_CAPTURE 00:44:04.676 #define SPDK_CONFIG_PGO_DIR 00:44:04.676 #undef SPDK_CONFIG_PGO_USE 00:44:04.676 #define SPDK_CONFIG_PREFIX /usr/local 00:44:04.676 #define SPDK_CONFIG_RAID5F 1 00:44:04.676 #undef SPDK_CONFIG_RBD 00:44:04.676 #define SPDK_CONFIG_RDMA 1 00:44:04.676 #define SPDK_CONFIG_RDMA_PROV verbs 00:44:04.676 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:44:04.676 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:44:04.676 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:44:04.676 #undef SPDK_CONFIG_SHARED 00:44:04.676 #undef SPDK_CONFIG_SMA 00:44:04.676 #define SPDK_CONFIG_TESTS 1 00:44:04.676 #undef SPDK_CONFIG_TSAN 00:44:04.676 #undef SPDK_CONFIG_UBLK 00:44:04.676 #define SPDK_CONFIG_UBSAN 1 00:44:04.676 #define SPDK_CONFIG_UNIT_TESTS 1 00:44:04.676 #undef SPDK_CONFIG_URING 00:44:04.676 #define SPDK_CONFIG_URING_PATH 00:44:04.676 #undef SPDK_CONFIG_URING_ZNS 00:44:04.676 #undef SPDK_CONFIG_USDT 00:44:04.676 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:44:04.676 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:44:04.676 #undef SPDK_CONFIG_VFIO_USER 00:44:04.676 #define SPDK_CONFIG_VFIO_USER_DIR 00:44:04.676 #define SPDK_CONFIG_VHOST 1 00:44:04.676 #define SPDK_CONFIG_VIRTIO 1 00:44:04.676 #undef SPDK_CONFIG_VTUNE 00:44:04.676 #define SPDK_CONFIG_VTUNE_DIR 00:44:04.676 #define SPDK_CONFIG_WERROR 1 00:44:04.676 #define SPDK_CONFIG_WPDK_DIR 00:44:04.676 #undef SPDK_CONFIG_XNVME 00:44:04.676 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:44:04.676 19:37:20 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:44:04.676 19:37:20 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:04.676 19:37:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:04.676 19:37:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:04.676 19:37:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:04.676 19:37:20 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:04.676 19:37:20 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:04.676 19:37:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:04.676 19:37:20 -- paths/export.sh@5 -- # export PATH 00:44:04.676 19:37:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:04.676 19:37:20 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:44:04.676 19:37:20 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:44:04.676 19:37:20 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:44:04.676 19:37:20 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:44:04.676 19:37:20 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:44:04.676 19:37:20 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:44:04.676 19:37:20 -- pm/common@67 -- # TEST_TAG=N/A 00:44:04.676 19:37:20 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:44:04.676 19:37:20 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:44:04.676 19:37:20 -- pm/common@71 -- # uname -s 00:44:04.676 19:37:20 -- pm/common@71 -- # PM_OS=Linux 00:44:04.676 19:37:20 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:44:04.676 19:37:20 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:44:04.676 19:37:20 -- pm/common@76 -- # [[ Linux == Linux ]] 00:44:04.676 19:37:20 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:44:04.676 19:37:20 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:44:04.676 19:37:20 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:44:04.676 19:37:20 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:44:04.676 19:37:20 -- common/autotest_common.sh@57 -- # : 0 00:44:04.676 19:37:20 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:44:04.676 19:37:20 -- common/autotest_common.sh@61 -- # : 0 00:44:04.676 19:37:20 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:44:04.676 19:37:20 -- common/autotest_common.sh@63 -- # : 0 00:44:04.676 19:37:20 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:44:04.676 19:37:20 -- common/autotest_common.sh@65 -- # : 1 00:44:04.676 19:37:20 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:44:04.676 19:37:20 -- common/autotest_common.sh@67 -- # : 1 00:44:04.676 19:37:20 -- common/autotest_common.sh@68 -- # export 
SPDK_TEST_UNITTEST 00:44:04.676 19:37:20 -- common/autotest_common.sh@69 -- # : 00:44:04.676 19:37:20 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:44:04.676 19:37:20 -- common/autotest_common.sh@71 -- # : 0 00:44:04.676 19:37:20 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:44:04.676 19:37:20 -- common/autotest_common.sh@73 -- # : 0 00:44:04.676 19:37:20 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:44:04.676 19:37:20 -- common/autotest_common.sh@75 -- # : 0 00:44:04.676 19:37:20 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:44:04.676 19:37:20 -- common/autotest_common.sh@77 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:44:04.677 19:37:20 -- common/autotest_common.sh@79 -- # : 1 00:44:04.677 19:37:20 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:44:04.677 19:37:20 -- common/autotest_common.sh@81 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:44:04.677 19:37:20 -- common/autotest_common.sh@83 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:44:04.677 19:37:20 -- common/autotest_common.sh@85 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:44:04.677 19:37:20 -- common/autotest_common.sh@87 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:44:04.677 19:37:20 -- common/autotest_common.sh@89 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:44:04.677 19:37:20 -- common/autotest_common.sh@91 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:44:04.677 19:37:20 -- common/autotest_common.sh@93 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:44:04.677 19:37:20 -- common/autotest_common.sh@95 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:44:04.677 19:37:20 -- common/autotest_common.sh@97 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:44:04.677 19:37:20 -- common/autotest_common.sh@99 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:44:04.677 19:37:20 -- common/autotest_common.sh@101 -- # : rdma 00:44:04.677 19:37:20 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:44:04.677 19:37:20 -- common/autotest_common.sh@103 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:44:04.677 19:37:20 -- common/autotest_common.sh@105 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:44:04.677 19:37:20 -- common/autotest_common.sh@107 -- # : 1 00:44:04.677 19:37:20 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:44:04.677 19:37:20 -- common/autotest_common.sh@109 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:44:04.677 19:37:20 -- common/autotest_common.sh@111 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:44:04.677 19:37:20 -- common/autotest_common.sh@113 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:44:04.677 19:37:20 -- common/autotest_common.sh@115 -- # : 0 00:44:04.677 19:37:20 -- 
common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:44:04.677 19:37:20 -- common/autotest_common.sh@117 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:44:04.677 19:37:20 -- common/autotest_common.sh@119 -- # : 1 00:44:04.677 19:37:20 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:44:04.677 19:37:20 -- common/autotest_common.sh@121 -- # : 1 00:44:04.677 19:37:20 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:44:04.677 19:37:20 -- common/autotest_common.sh@123 -- # : 00:44:04.677 19:37:20 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:44:04.677 19:37:20 -- common/autotest_common.sh@125 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:44:04.677 19:37:20 -- common/autotest_common.sh@127 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:44:04.677 19:37:20 -- common/autotest_common.sh@129 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:44:04.677 19:37:20 -- common/autotest_common.sh@131 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:44:04.677 19:37:20 -- common/autotest_common.sh@133 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:44:04.677 19:37:20 -- common/autotest_common.sh@135 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:44:04.677 19:37:20 -- common/autotest_common.sh@137 -- # : 00:44:04.677 19:37:20 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:44:04.677 19:37:20 -- common/autotest_common.sh@139 -- # : true 00:44:04.677 19:37:20 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:44:04.677 19:37:20 -- common/autotest_common.sh@141 -- # : 1 00:44:04.677 19:37:20 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:44:04.677 19:37:20 -- common/autotest_common.sh@143 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:44:04.677 19:37:20 -- common/autotest_common.sh@145 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:44:04.677 19:37:20 -- common/autotest_common.sh@147 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:44:04.677 19:37:20 -- common/autotest_common.sh@149 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:44:04.677 19:37:20 -- common/autotest_common.sh@151 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:44:04.677 19:37:20 -- common/autotest_common.sh@153 -- # : 00:44:04.677 19:37:20 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:44:04.677 19:37:20 -- common/autotest_common.sh@155 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:44:04.677 19:37:20 -- common/autotest_common.sh@157 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:44:04.677 19:37:20 -- common/autotest_common.sh@159 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:44:04.677 19:37:20 -- common/autotest_common.sh@161 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:44:04.677 19:37:20 -- common/autotest_common.sh@163 -- # : 0 00:44:04.677 19:37:20 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:44:04.677 19:37:20 -- common/autotest_common.sh@166 -- # : 00:44:04.677 19:37:20 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:44:04.677 19:37:20 -- common/autotest_common.sh@168 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:44:04.677 19:37:20 -- common/autotest_common.sh@170 -- # : 0 00:44:04.677 19:37:20 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:44:04.677 19:37:20 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:04.677 19:37:20 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:44:04.677 19:37:20 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:44:04.677 19:37:20 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:44:04.677 19:37:20 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:44:04.677 19:37:20 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:44:04.677 19:37:20 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:44:04.677 19:37:20 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:44:04.677 19:37:20 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:44:04.677 19:37:20 
-- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:44:04.677 19:37:20 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:44:04.677 19:37:20 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:44:04.677 19:37:20 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:44:04.677 19:37:20 -- common/autotest_common.sh@199 -- # cat 00:44:04.677 19:37:20 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:44:04.677 19:37:20 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:44:04.677 19:37:20 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:44:04.677 19:37:20 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:44:04.677 19:37:20 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:44:04.677 19:37:20 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:44:04.677 19:37:20 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:44:04.677 19:37:20 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:44:04.677 19:37:20 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:44:04.677 19:37:20 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:44:04.677 19:37:20 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:44:04.677 19:37:20 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:44:04.677 19:37:20 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:44:04.677 19:37:20 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:44:04.678 19:37:20 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:44:04.678 19:37:20 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:44:04.678 19:37:20 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:44:04.678 19:37:20 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:44:04.678 19:37:20 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:44:04.678 19:37:20 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:44:04.678 19:37:20 -- common/autotest_common.sh@252 -- # export valgrind= 00:44:04.678 19:37:20 -- common/autotest_common.sh@252 -- # valgrind= 00:44:04.678 19:37:20 -- common/autotest_common.sh@258 -- # uname -s 00:44:04.678 19:37:20 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:44:04.678 19:37:20 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:44:04.678 19:37:20 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:44:04.678 19:37:20 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:44:04.678 19:37:20 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:44:04.678 19:37:20 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:44:04.678 19:37:20 -- common/autotest_common.sh@268 -- # MAKE=make 00:44:04.678 19:37:20 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:44:04.678 19:37:20 -- common/autotest_common.sh@285 -- # 
export HUGEMEM=4096 00:44:04.678 19:37:20 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:44:04.678 19:37:20 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:44:04.678 19:37:20 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:44:04.678 19:37:20 -- common/autotest_common.sh@307 -- # [[ -z 144141 ]] 00:44:04.678 19:37:20 -- common/autotest_common.sh@307 -- # kill -0 144141 00:44:04.937 19:37:20 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:44:04.937 19:37:20 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:44:04.937 19:37:20 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:44:04.937 19:37:20 -- common/autotest_common.sh@320 -- # local mount target_dir 00:44:04.937 19:37:20 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:44:04.937 19:37:20 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:44:04.937 19:37:20 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:44:04.937 19:37:20 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:44:04.937 19:37:20 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.eIQs5v 00:44:04.937 19:37:20 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:44:04.937 19:37:20 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:44:04.937 19:37:20 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:44:04.937 19:37:20 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.eIQs5v/tests/interrupt /tmp/spdk.eIQs5v 00:44:04.937 19:37:20 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:44:04.937 19:37:20 -- common/autotest_common.sh@316 -- # df -T 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=udev 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=6224465920 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6224465920 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=1249759232 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1254514688 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=4755456 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=10291875840 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=10308141056 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use 
avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=6269952000 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=6272565248 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop0 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=103089152 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109422592 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop2 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=41025536 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=41025536 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop1 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=96337920 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=96337920 
00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=1254510592 00:44:04.937 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1254510592 00:44:04.937 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:04.937 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output 00:44:04.937 19:37:20 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:44:04.938 19:37:20 -- common/autotest_common.sh@351 -- # avails["$mount"]=90222780416 00:44:04.938 19:37:20 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:44:04.938 19:37:20 -- common/autotest_common.sh@352 -- # uses["$mount"]=9479999488 00:44:04.938 19:37:20 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:04.938 19:37:20 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:44:04.938 * Looking for test storage... 00:44:04.938 19:37:20 -- common/autotest_common.sh@357 -- # local target_space new_size 00:44:04.938 19:37:20 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:44:04.938 19:37:20 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.938 19:37:20 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:44:04.938 19:37:20 -- common/autotest_common.sh@361 -- # mount=/ 00:44:04.938 19:37:20 -- common/autotest_common.sh@363 -- # target_space=10291875840 00:44:04.938 19:37:20 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:44:04.938 19:37:20 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:44:04.938 19:37:20 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:44:04.938 19:37:20 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:44:04.938 19:37:20 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:44:04.938 19:37:20 -- common/autotest_common.sh@370 -- # new_size=12522733568 00:44:04.938 19:37:20 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:44:04.938 19:37:20 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.938 19:37:20 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.938 19:37:20 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:04.938 19:37:20 -- common/autotest_common.sh@378 -- # return 0 00:44:04.938 19:37:20 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:44:04.938 19:37:20 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:44:04.938 19:37:20 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:44:04.938 19:37:20 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:44:04.938 19:37:20 -- common/autotest_common.sh@1673 -- # true 
00:44:04.938 19:37:20 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:44:04.938 19:37:20 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:44:04.938 19:37:20 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:44:04.938 19:37:20 -- common/autotest_common.sh@27 -- # exec 00:44:04.938 19:37:20 -- common/autotest_common.sh@29 -- # exec 00:44:04.938 19:37:20 -- common/autotest_common.sh@31 -- # xtrace_restore 00:44:04.938 19:37:20 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:44:04.938 19:37:20 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:44:04.938 19:37:20 -- common/autotest_common.sh@18 -- # set -x 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:44:04.938 19:37:20 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:44:04.938 19:37:20 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:44:04.938 19:37:20 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144194 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144194 /var/tmp/spdk.sock 00:44:04.938 19:37:20 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:44:04.938 19:37:20 -- common/autotest_common.sh@817 -- # '[' -z 144194 ']' 00:44:04.938 19:37:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:04.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:04.938 19:37:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:04.938 19:37:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:04.938 19:37:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:04.938 19:37:20 -- common/autotest_common.sh@10 -- # set +x 00:44:04.938 [2024-04-18 19:37:20.691485] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:04.938 [2024-04-18 19:37:20.691634] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144194 ] 00:44:05.196 [2024-04-18 19:37:20.869083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:05.455 [2024-04-18 19:37:21.147951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:05.455 [2024-04-18 19:37:21.147984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:05.455 [2024-04-18 19:37:21.147983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:05.713 [2024-04-18 19:37:21.495176] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:06.027 19:37:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:06.027 19:37:21 -- common/autotest_common.sh@850 -- # return 0 00:44:06.027 19:37:21 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:44:06.027 19:37:21 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:06.285 Malloc0 00:44:06.285 Malloc1 00:44:06.285 Malloc2 00:44:06.285 19:37:21 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:44:06.285 19:37:22 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:44:06.285 19:37:22 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:06.285 19:37:22 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:44:06.285 5000+0 records in 00:44:06.285 5000+0 records out 00:44:06.285 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0220639 s, 464 MB/s 00:44:06.285 19:37:22 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:44:06.543 AIO0 00:44:06.543 19:37:22 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 144194 00:44:06.543 19:37:22 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 144194 without_thd 00:44:06.543 19:37:22 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=144194 00:44:06.543 19:37:22 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:44:06.543 19:37:22 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:44:06.543 19:37:22 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:44:06.543 19:37:22 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:44:06.543 19:37:22 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:44:06.543 19:37:22 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:44:06.543 19:37:22 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:06.543 19:37:22 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:44:06.543 19:37:22 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:44:06.801 19:37:22 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:44:06.801 19:37:22 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:44:06.801 19:37:22 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:44:07.058 19:37:22 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:44:07.058 19:37:22 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:44:07.058 spdk_thread ids are 1 on reactor0. 00:44:07.058 19:37:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:44:07.058 19:37:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144194 0 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144194 0 idle 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:07.058 19:37:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144194 root 20 0 20.1t 146048 28804 S 0.0 1.2 0:00.90 reactor_0' 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@48 -- # echo 144194 root 20 0 20.1t 146048 28804 S 0.0 1.2 0:00.90 reactor_0 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:07.317 19:37:23 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:44:07.317 19:37:23 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144194 1 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144194 1 idle 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:44:07.317 19:37:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:07.318 
19:37:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144197 root 20 0 20.1t 146048 28804 S 0.0 1.2 0:00.00 reactor_1' 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@48 -- # echo 144197 root 20 0 20.1t 146048 28804 S 0.0 1.2 0:00.00 reactor_1 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:07.318 19:37:23 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:44:07.318 19:37:23 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144194 2 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144194 2 idle 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:07.318 19:37:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144198 root 20 0 20.1t 146048 28804 S 0.0 1.2 0:00.00 reactor_2' 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@48 -- # echo 144198 root 20 0 20.1t 146048 28804 S 0.0 1.2 0:00.00 reactor_2 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:07.576 19:37:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:07.576 19:37:23 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:44:07.576 19:37:23 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
00:44:07.576 19:37:23 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:44:07.835 [2024-04-18 19:37:23.641383] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:07.835 19:37:23 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:44:08.093 [2024-04-18 19:37:23.953138] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:44:08.093 [2024-04-18 19:37:23.953716] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:08.093 19:37:23 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:44:08.351 [2024-04-18 19:37:24.257157] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:44:08.351 [2024-04-18 19:37:24.257783] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:08.609 19:37:24 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:44:08.609 19:37:24 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144194 0 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144194 0 busy 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144194 root 20 0 20.1t 146160 28804 R 99.9 1.2 0:01.40 reactor_0' 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@48 -- # echo 144194 root 20 0 20.1t 146160 28804 R 99.9 1.2 0:01.40 reactor_0 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:44:08.609 19:37:24 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:08.610 19:37:24 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:44:08.610 19:37:24 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144194 2 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144194 2 busy 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:44:08.610 
19:37:24 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:44:08.610 19:37:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144198 root 20 0 20.1t 146160 28804 R 99.9 1.2 0:00.34 reactor_2' 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@48 -- # echo 144198 root 20 0 20.1t 146160 28804 R 99.9 1.2 0:00.34 reactor_2 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:44:08.868 19:37:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:08.868 19:37:24 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:44:09.126 [2024-04-18 19:37:24.849163] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:44:09.126 [2024-04-18 19:37:24.851518] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:09.126 19:37:24 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:44:09.126 19:37:24 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 144194 2 00:44:09.126 19:37:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144194 2 idle 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:09.127 19:37:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144198 root 20 0 20.1t 146224 28804 S 0.0 1.2 0:00.58 reactor_2' 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@48 -- # echo 144198 root 20 0 20.1t 146224 28804 S 0.0 1.2 0:00.58 reactor_2 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:09.127 19:37:25 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:09.127 19:37:25 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:09.127 19:37:25 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:44:09.385 [2024-04-18 19:37:25.257092] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:44:09.385 [2024-04-18 19:37:25.258238] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:09.385 19:37:25 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:44:09.385 19:37:25 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:44:09.385 19:37:25 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:44:09.643 [2024-04-18 19:37:25.521260] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:09.643 19:37:25 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 144194 0 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144194 0 idle 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@33 -- # local pid=144194 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144194 -w 256 00:44:09.643 19:37:25 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144194 root 20 0 20.1t 146320 28804 S 0.0 1.2 0:02.22 reactor_0' 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@48 -- # echo 144194 root 20 0 20.1t 146320 28804 S 0.0 1.2 0:02.22 reactor_0 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:09.901 19:37:25 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:09.901 19:37:25 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:44:09.901 19:37:25 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:44:09.901 19:37:25 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:44:09.901 19:37:25 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 144194 
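The busy/idle probes traced above all follow one pattern: take a single batch sample of the target's threads with top, strip the leading whitespace, pull the %CPU column, and compare the truncated value against a threshold (a "busy" reactor must stay at or above 70%, an "idle" one at or below 30%). A minimal sketch reconstructed from the xtrace, assuming the helper takes the target pid, the reactor index, and the expected state; the thresholds and field positions are exactly those shown in the trace, while the integer truncation is an assumption:

reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3
    local top_reactor cpu_rate
    # one batch sample of the process threads; column 9 of top -H output is %CPU
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}                  # 99.9 -> 99, 0.0 -> 0
    if [[ $state == busy ]]; then
        [[ $cpu_rate -lt 70 ]] && return 1   # a busy reactor must not drop below 70%
    else
        [[ $cpu_rate -gt 30 ]] && return 1   # an idle reactor must not exceed 30%
    fi
    return 0
}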
00:44:09.901 19:37:25 -- common/autotest_common.sh@936 -- # '[' -z 144194 ']' 00:44:09.901 19:37:25 -- common/autotest_common.sh@940 -- # kill -0 144194 00:44:09.901 19:37:25 -- common/autotest_common.sh@941 -- # uname 00:44:09.901 19:37:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:09.901 19:37:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144194 00:44:09.901 19:37:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:09.901 19:37:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:09.901 19:37:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144194' 00:44:09.901 killing process with pid 144194 00:44:09.901 19:37:25 -- common/autotest_common.sh@955 -- # kill 144194 00:44:09.901 19:37:25 -- common/autotest_common.sh@960 -- # wait 144194 00:44:11.831 19:37:27 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:44:11.831 19:37:27 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144350 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:11.831 19:37:27 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144350 /var/tmp/spdk.sock 00:44:11.831 19:37:27 -- common/autotest_common.sh@817 -- # '[' -z 144350 ']' 00:44:11.831 19:37:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:11.831 19:37:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:11.831 19:37:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:11.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:11.831 19:37:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:11.831 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:44:11.831 [2024-04-18 19:37:27.721793] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:11.831 [2024-04-18 19:37:27.722007] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144350 ] 00:44:12.089 [2024-04-18 19:37:27.905726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:12.346 [2024-04-18 19:37:28.188810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:12.346 [2024-04-18 19:37:28.188880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:12.346 [2024-04-18 19:37:28.188903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:12.913 [2024-04-18 19:37:28.560007] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
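The killprocess call traced above (and repeated later for pid 144350) is a small guard-and-kill helper: check that the pid is set and still alive, refuse to signal a sudo wrapper on Linux, then kill and wait so the exit status is reaped. A minimal sketch reconstructed from the xtrace; the names follow the trace, but the exact return codes of the real helper may differ:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                  # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1  # never signal a sudo wrapper by mistake
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # reap the child so its status is collected
}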
00:44:12.913 19:37:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:12.913 19:37:28 -- common/autotest_common.sh@850 -- # return 0 00:44:12.913 19:37:28 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:44:12.913 19:37:28 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:13.540 Malloc0 00:44:13.540 Malloc1 00:44:13.540 Malloc2 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:44:13.540 5000+0 records in 00:44:13.540 5000+0 records out 00:44:13.540 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0289859 s, 353 MB/s 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:44:13.540 AIO0 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 144350 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 144350 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=144350 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:44:13.540 19:37:29 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:44:13.540 19:37:29 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:44:13.799 19:37:29 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:44:13.799 19:37:29 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:44:13.799 19:37:29 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:44:14.059 spdk_thread ids are 1 on reactor0. 
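The thd0_ids/thd2_ids arrays filled in just above come from reactor_get_thread_ids, which asks the running target for its thread statistics over the RPC socket and filters them by cpumask with jq. A minimal sketch reconstructed from the xtrace; the hex-to-decimal conversion is written here with shell arithmetic, which reproduces the values seen in the trace (0x1 -> 1, 0x4 -> 4) but is only an assumption about how the real helper normalizes the mask:

reactor_get_thread_ids() {
    local reactor_cpumask=$1
    local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    reactor_cpumask=$((reactor_cpumask))       # normalize 0x1 -> 1, 0x4 -> 4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
}

thd0_ids=($(reactor_get_thread_ids 0x1))   # threads pinned to reactor 0
thd2_ids=($(reactor_get_thread_ids 0x4))   # threads pinned to reactor 2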
00:44:14.059 19:37:29 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:44:14.059 19:37:29 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:44:14.059 19:37:29 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:44:14.059 19:37:29 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144350 0 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144350 0 idle 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:14.059 19:37:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144350 root 20 0 20.1t 146424 29240 S 0.0 1.2 0:00.96 reactor_0' 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@48 -- # echo 144350 root 20 0 20.1t 146424 29240 S 0.0 1.2 0:00.96 reactor_0 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:14.318 19:37:30 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:44:14.318 19:37:30 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144350 1 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144350 1 idle 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:14.318 19:37:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144360 root 20 0 20.1t 146424 29240 S 0.0 1.2 0:00.00 reactor_1' 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@48 -- # echo 144360 root 20 0 20.1t 146424 29240 S 0.0 1.2 0:00.00 reactor_1 00:44:14.576 19:37:30 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:14.576 19:37:30 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:44:14.576 19:37:30 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 144350 2 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144350 2 idle 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:14.576 19:37:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144361 root 20 0 20.1t 146424 29240 S 0.0 1.2 0:00.00 reactor_2' 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@48 -- # echo 144361 root 20 0 20.1t 146424 29240 S 0.0 1.2 0:00.00 reactor_2 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:14.577 19:37:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:14.577 19:37:30 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:44:14.577 19:37:30 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:44:14.834 [2024-04-18 19:37:30.742497] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:44:14.834 [2024-04-18 19:37:30.742733] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:44:14.834 [2024-04-18 19:37:30.743510] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:15.091 19:37:30 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:44:15.349 [2024-04-18 19:37:31.030510] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:44:15.349 [2024-04-18 19:37:31.031333] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:15.349 19:37:31 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:44:15.349 19:37:31 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144350 0 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144350 0 busy 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144350 root 20 0 20.1t 146492 29240 R 99.9 1.2 0:01.43 reactor_0' 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@48 -- # echo 144350 root 20 0 20.1t 146492 29240 R 99.9 1.2 0:01.43 reactor_0 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:15.349 19:37:31 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:44:15.349 19:37:31 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 144350 2 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 144350 2 busy 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:15.349 19:37:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
144361 root 20 0 20.1t 146492 29240 R 99.9 1.2 0:00.34 reactor_2' 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@48 -- # echo 144361 root 20 0 20.1t 146492 29240 R 99.9 1.2 0:00.34 reactor_2 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:44:15.607 19:37:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:15.607 19:37:31 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:44:15.864 [2024-04-18 19:37:31.674696] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:44:15.864 [2024-04-18 19:37:31.675068] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:15.864 19:37:31 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:44:15.864 19:37:31 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 144350 2 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144350 2 idle 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:44:15.864 19:37:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144361 root 20 0 20.1t 146560 29240 S 0.0 1.2 0:00.64 reactor_2' 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@48 -- # echo 144361 root 20 0 20.1t 146560 29240 S 0.0 1.2 0:00.64 reactor_2 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:16.122 19:37:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:16.122 19:37:31 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:44:16.380 [2024-04-18 19:37:32.102748] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:44:16.380 [2024-04-18 19:37:32.103658] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:44:16.380 [2024-04-18 19:37:32.103712] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:44:16.380 19:37:32 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:44:16.380 19:37:32 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 144350 0 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 144350 0 idle 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@33 -- # local pid=144350 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 144350 -w 256 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 144350 root 20 0 20.1t 146600 29240 S 0.0 1.2 0:02.33 reactor_0' 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@48 -- # echo 144350 root 20 0 20.1t 146600 29240 S 0.0 1.2 0:02.33 reactor_0 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:44:16.380 19:37:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:44:16.380 19:37:32 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:44:16.380 19:37:32 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:44:16.380 19:37:32 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:44:16.380 19:37:32 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 144350 00:44:16.380 19:37:32 -- common/autotest_common.sh@936 -- # '[' -z 144350 ']' 00:44:16.380 19:37:32 -- common/autotest_common.sh@940 -- # kill -0 144350 00:44:16.380 19:37:32 -- common/autotest_common.sh@941 -- # uname 00:44:16.380 19:37:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:16.380 19:37:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144350 00:44:16.639 killing process with pid 144350 00:44:16.639 19:37:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:16.639 19:37:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:16.639 19:37:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144350' 00:44:16.639 19:37:32 -- common/autotest_common.sh@955 -- # kill 144350 00:44:16.639 19:37:32 -- common/autotest_common.sh@960 -- # wait 144350 00:44:18.537 19:37:34 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:44:18.537 ************************************ 00:44:18.537 END TEST reactor_set_interrupt 00:44:18.537 ************************************ 00:44:18.537 00:44:18.537 real 0m13.719s 00:44:18.537 user 0m14.551s 00:44:18.537 sys 0m1.752s 00:44:18.537 19:37:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:18.537 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:44:18.537 19:37:34 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:44:18.537 19:37:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:44:18.537 19:37:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:18.537 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:44:18.537 ************************************ 00:44:18.537 START TEST reap_unregistered_poller 00:44:18.537 ************************************ 00:44:18.537 19:37:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:44:18.537 * Looking for test storage... 00:44:18.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.537 19:37:34 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:18.537 19:37:34 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:44:18.537 19:37:34 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:44:18.537 19:37:34 -- common/autotest_common.sh@34 -- # set -e 00:44:18.537 19:37:34 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:44:18.537 19:37:34 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:44:18.537 19:37:34 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:44:18.537 19:37:34 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:44:18.537 19:37:34 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:44:18.537 19:37:34 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:44:18.537 19:37:34 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:44:18.537 19:37:34 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:44:18.537 19:37:34 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:44:18.537 19:37:34 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:44:18.537 19:37:34 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:44:18.537 19:37:34 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:44:18.537 19:37:34 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:44:18.537 19:37:34 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:44:18.537 19:37:34 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:44:18.537 19:37:34 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:44:18.537 19:37:34 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:44:18.537 19:37:34 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:44:18.537 19:37:34 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:44:18.537 19:37:34 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:44:18.537 19:37:34 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:44:18.537 19:37:34 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:44:18.537 19:37:34 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:44:18.537 19:37:34 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:44:18.537 19:37:34 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:44:18.537 19:37:34 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:44:18.537 19:37:34 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:44:18.537 19:37:34 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:44:18.537 19:37:34 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:44:18.537 19:37:34 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:44:18.537 19:37:34 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:44:18.537 19:37:34 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:44:18.537 19:37:34 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:44:18.537 19:37:34 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:44:18.537 19:37:34 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:44:18.537 19:37:34 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:44:18.537 19:37:34 -- common/build_config.sh@36 -- # 
CONFIG_HAVE_EVP_MAC=n 00:44:18.537 19:37:34 -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:44:18.537 19:37:34 -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:44:18.537 19:37:34 -- common/build_config.sh@39 -- # CONFIG_ASAN=y 00:44:18.537 19:37:34 -- common/build_config.sh@40 -- # CONFIG_SHARED=n 00:44:18.537 19:37:34 -- common/build_config.sh@41 -- # CONFIG_VTUNE_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@42 -- # CONFIG_RDMA_SET_TOS=y 00:44:18.537 19:37:34 -- common/build_config.sh@43 -- # CONFIG_VBDEV_COMPRESS=n 00:44:18.537 19:37:34 -- common/build_config.sh@44 -- # CONFIG_VFIO_USER_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@45 -- # CONFIG_PGO_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@46 -- # CONFIG_FUZZER_LIB= 00:44:18.537 19:37:34 -- common/build_config.sh@47 -- # CONFIG_HAVE_EXECINFO_H=y 00:44:18.537 19:37:34 -- common/build_config.sh@48 -- # CONFIG_USDT=n 00:44:18.537 19:37:34 -- common/build_config.sh@49 -- # CONFIG_HAVE_KEYUTILS=y 00:44:18.537 19:37:34 -- common/build_config.sh@50 -- # CONFIG_URING_ZNS=n 00:44:18.537 19:37:34 -- common/build_config.sh@51 -- # CONFIG_FC_PATH= 00:44:18.537 19:37:34 -- common/build_config.sh@52 -- # CONFIG_COVERAGE=y 00:44:18.537 19:37:34 -- common/build_config.sh@53 -- # CONFIG_CUSTOMOCF=n 00:44:18.537 19:37:34 -- common/build_config.sh@54 -- # CONFIG_DPDK_PKG_CONFIG=n 00:44:18.537 19:37:34 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:44:18.537 19:37:34 -- common/build_config.sh@56 -- # CONFIG_DEBUG=y 00:44:18.537 19:37:34 -- common/build_config.sh@57 -- # CONFIG_RDMA=y 00:44:18.537 19:37:34 -- common/build_config.sh@58 -- # CONFIG_HAVE_ARC4RANDOM=n 00:44:18.537 19:37:34 -- common/build_config.sh@59 -- # CONFIG_FUZZER=n 00:44:18.537 19:37:34 -- common/build_config.sh@60 -- # CONFIG_FC=n 00:44:18.537 19:37:34 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:44:18.537 19:37:34 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBARCHIVE=n 00:44:18.537 19:37:34 -- common/build_config.sh@63 -- # CONFIG_DPDK_COMPRESSDEV=n 00:44:18.537 19:37:34 -- common/build_config.sh@64 -- # CONFIG_CROSS_PREFIX= 00:44:18.537 19:37:34 -- common/build_config.sh@65 -- # CONFIG_PREFIX=/usr/local 00:44:18.537 19:37:34 -- common/build_config.sh@66 -- # CONFIG_HAVE_LIBBSD=n 00:44:18.537 19:37:34 -- common/build_config.sh@67 -- # CONFIG_UBSAN=y 00:44:18.537 19:37:34 -- common/build_config.sh@68 -- # CONFIG_PGO_CAPTURE=n 00:44:18.537 19:37:34 -- common/build_config.sh@69 -- # CONFIG_UBLK=n 00:44:18.537 19:37:34 -- common/build_config.sh@70 -- # CONFIG_ISAL_CRYPTO=y 00:44:18.537 19:37:34 -- common/build_config.sh@71 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:44:18.537 19:37:34 -- common/build_config.sh@72 -- # CONFIG_CRYPTO=n 00:44:18.537 19:37:34 -- common/build_config.sh@73 -- # CONFIG_RBD=n 00:44:18.537 19:37:34 -- common/build_config.sh@74 -- # CONFIG_LIBDIR= 00:44:18.537 19:37:34 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB_DIR= 00:44:18.537 19:37:34 -- common/build_config.sh@76 -- # CONFIG_PGO_USE=n 00:44:18.537 19:37:34 -- common/build_config.sh@77 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:18.537 19:37:34 -- common/build_config.sh@78 -- # CONFIG_GOLANG=n 00:44:18.537 19:37:34 -- common/build_config.sh@79 -- # CONFIG_VHOST=y 00:44:18.537 19:37:34 -- common/build_config.sh@80 -- # CONFIG_IDXD=y 00:44:18.537 19:37:34 -- common/build_config.sh@81 -- # CONFIG_AVAHI=n 00:44:18.537 19:37:34 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:44:18.537 
19:37:34 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:44:18.538 19:37:34 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:44:18.538 19:37:34 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:44:18.538 19:37:34 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:44:18.538 19:37:34 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:44:18.538 19:37:34 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:44:18.538 19:37:34 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:44:18.538 19:37:34 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:44:18.538 19:37:34 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:44:18.538 19:37:34 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:44:18.538 19:37:34 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:44:18.538 19:37:34 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:44:18.538 19:37:34 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:44:18.538 19:37:34 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:44:18.538 19:37:34 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:44:18.538 19:37:34 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:44:18.538 #define SPDK_CONFIG_H 00:44:18.538 #define SPDK_CONFIG_APPS 1 00:44:18.538 #define SPDK_CONFIG_ARCH native 00:44:18.538 #define SPDK_CONFIG_ASAN 1 00:44:18.538 #undef SPDK_CONFIG_AVAHI 00:44:18.538 #undef SPDK_CONFIG_CET 00:44:18.538 #define SPDK_CONFIG_COVERAGE 1 00:44:18.538 #define SPDK_CONFIG_CROSS_PREFIX 00:44:18.538 #undef SPDK_CONFIG_CRYPTO 00:44:18.538 #undef SPDK_CONFIG_CRYPTO_MLX5 00:44:18.538 #undef SPDK_CONFIG_CUSTOMOCF 00:44:18.538 #undef SPDK_CONFIG_DAOS 00:44:18.538 #define SPDK_CONFIG_DAOS_DIR 00:44:18.538 #define SPDK_CONFIG_DEBUG 1 00:44:18.538 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:44:18.538 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:44:18.538 #define SPDK_CONFIG_DPDK_INC_DIR 00:44:18.538 #define SPDK_CONFIG_DPDK_LIB_DIR 00:44:18.538 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:44:18.538 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:18.538 #define SPDK_CONFIG_EXAMPLES 1 00:44:18.538 #undef SPDK_CONFIG_FC 00:44:18.538 #define SPDK_CONFIG_FC_PATH 00:44:18.538 #define SPDK_CONFIG_FIO_PLUGIN 1 00:44:18.538 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:44:18.538 #undef SPDK_CONFIG_FUSE 00:44:18.538 #undef SPDK_CONFIG_FUZZER 00:44:18.538 #define SPDK_CONFIG_FUZZER_LIB 00:44:18.538 #undef SPDK_CONFIG_GOLANG 00:44:18.538 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:44:18.538 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:44:18.538 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:44:18.538 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:44:18.538 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:44:18.538 #undef SPDK_CONFIG_HAVE_LIBBSD 00:44:18.538 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:44:18.538 #define SPDK_CONFIG_IDXD 1 00:44:18.538 #undef SPDK_CONFIG_IDXD_KERNEL 00:44:18.538 #undef SPDK_CONFIG_IPSEC_MB 00:44:18.538 #define SPDK_CONFIG_IPSEC_MB_DIR 00:44:18.538 #define SPDK_CONFIG_ISAL 1 00:44:18.538 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:44:18.538 #define 
SPDK_CONFIG_ISCSI_INITIATOR 1 00:44:18.538 #define SPDK_CONFIG_LIBDIR 00:44:18.538 #undef SPDK_CONFIG_LTO 00:44:18.538 #define SPDK_CONFIG_MAX_LCORES 00:44:18.538 #define SPDK_CONFIG_NVME_CUSE 1 00:44:18.538 #undef SPDK_CONFIG_OCF 00:44:18.538 #define SPDK_CONFIG_OCF_PATH 00:44:18.538 #define SPDK_CONFIG_OPENSSL_PATH 00:44:18.538 #undef SPDK_CONFIG_PGO_CAPTURE 00:44:18.538 #define SPDK_CONFIG_PGO_DIR 00:44:18.538 #undef SPDK_CONFIG_PGO_USE 00:44:18.538 #define SPDK_CONFIG_PREFIX /usr/local 00:44:18.538 #define SPDK_CONFIG_RAID5F 1 00:44:18.538 #undef SPDK_CONFIG_RBD 00:44:18.538 #define SPDK_CONFIG_RDMA 1 00:44:18.538 #define SPDK_CONFIG_RDMA_PROV verbs 00:44:18.538 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:44:18.538 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:44:18.538 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:44:18.538 #undef SPDK_CONFIG_SHARED 00:44:18.538 #undef SPDK_CONFIG_SMA 00:44:18.538 #define SPDK_CONFIG_TESTS 1 00:44:18.538 #undef SPDK_CONFIG_TSAN 00:44:18.538 #undef SPDK_CONFIG_UBLK 00:44:18.538 #define SPDK_CONFIG_UBSAN 1 00:44:18.538 #define SPDK_CONFIG_UNIT_TESTS 1 00:44:18.538 #undef SPDK_CONFIG_URING 00:44:18.538 #define SPDK_CONFIG_URING_PATH 00:44:18.538 #undef SPDK_CONFIG_URING_ZNS 00:44:18.538 #undef SPDK_CONFIG_USDT 00:44:18.538 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:44:18.538 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:44:18.538 #undef SPDK_CONFIG_VFIO_USER 00:44:18.538 #define SPDK_CONFIG_VFIO_USER_DIR 00:44:18.538 #define SPDK_CONFIG_VHOST 1 00:44:18.538 #define SPDK_CONFIG_VIRTIO 1 00:44:18.538 #undef SPDK_CONFIG_VTUNE 00:44:18.538 #define SPDK_CONFIG_VTUNE_DIR 00:44:18.538 #define SPDK_CONFIG_WERROR 1 00:44:18.538 #define SPDK_CONFIG_WPDK_DIR 00:44:18.538 #undef SPDK_CONFIG_XNVME 00:44:18.538 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:44:18.538 19:37:34 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:44:18.538 19:37:34 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:18.538 19:37:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.538 19:37:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.538 19:37:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.538 19:37:34 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.538 19:37:34 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.538 19:37:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.538 19:37:34 -- paths/export.sh@5 -- # export PATH 00:44:18.538 19:37:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.538 19:37:34 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:44:18.538 19:37:34 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:44:18.538 19:37:34 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:44:18.538 19:37:34 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:44:18.538 19:37:34 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:44:18.538 19:37:34 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:44:18.538 19:37:34 -- pm/common@67 -- # TEST_TAG=N/A 00:44:18.538 19:37:34 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:44:18.538 19:37:34 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:44:18.538 19:37:34 -- pm/common@71 -- # uname -s 00:44:18.538 19:37:34 -- pm/common@71 -- # PM_OS=Linux 00:44:18.538 19:37:34 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:44:18.538 19:37:34 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:44:18.538 19:37:34 -- pm/common@76 -- # [[ Linux == Linux ]] 00:44:18.538 19:37:34 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:44:18.538 19:37:34 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:44:18.538 19:37:34 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:44:18.538 19:37:34 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:44:18.538 19:37:34 -- common/autotest_common.sh@57 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:44:18.538 19:37:34 -- common/autotest_common.sh@61 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:44:18.538 19:37:34 -- common/autotest_common.sh@63 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:44:18.538 19:37:34 -- common/autotest_common.sh@65 -- # : 1 00:44:18.538 19:37:34 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:44:18.538 19:37:34 -- common/autotest_common.sh@67 -- # : 1 00:44:18.538 19:37:34 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:44:18.538 19:37:34 -- common/autotest_common.sh@69 -- # : 00:44:18.538 19:37:34 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:44:18.538 19:37:34 -- common/autotest_common.sh@71 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:44:18.538 19:37:34 -- common/autotest_common.sh@73 -- # : 0 00:44:18.538 19:37:34 -- 
common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:44:18.538 19:37:34 -- common/autotest_common.sh@75 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:44:18.538 19:37:34 -- common/autotest_common.sh@77 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:44:18.538 19:37:34 -- common/autotest_common.sh@79 -- # : 1 00:44:18.538 19:37:34 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:44:18.538 19:37:34 -- common/autotest_common.sh@81 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:44:18.538 19:37:34 -- common/autotest_common.sh@83 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:44:18.538 19:37:34 -- common/autotest_common.sh@85 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:44:18.538 19:37:34 -- common/autotest_common.sh@87 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:44:18.538 19:37:34 -- common/autotest_common.sh@89 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:44:18.538 19:37:34 -- common/autotest_common.sh@91 -- # : 0 00:44:18.538 19:37:34 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:44:18.538 19:37:34 -- common/autotest_common.sh@93 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:44:18.539 19:37:34 -- common/autotest_common.sh@95 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:44:18.539 19:37:34 -- common/autotest_common.sh@97 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:44:18.539 19:37:34 -- common/autotest_common.sh@99 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:44:18.539 19:37:34 -- common/autotest_common.sh@101 -- # : rdma 00:44:18.539 19:37:34 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:44:18.539 19:37:34 -- common/autotest_common.sh@103 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:44:18.539 19:37:34 -- common/autotest_common.sh@105 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:44:18.539 19:37:34 -- common/autotest_common.sh@107 -- # : 1 00:44:18.539 19:37:34 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:44:18.539 19:37:34 -- common/autotest_common.sh@109 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:44:18.539 19:37:34 -- common/autotest_common.sh@111 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:44:18.539 19:37:34 -- common/autotest_common.sh@113 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:44:18.539 19:37:34 -- common/autotest_common.sh@115 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:44:18.539 19:37:34 -- common/autotest_common.sh@117 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:44:18.539 19:37:34 -- common/autotest_common.sh@119 -- # : 1 00:44:18.539 19:37:34 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:44:18.539 19:37:34 -- common/autotest_common.sh@121 -- # : 1 00:44:18.539 
19:37:34 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:44:18.539 19:37:34 -- common/autotest_common.sh@123 -- # : 00:44:18.539 19:37:34 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:44:18.539 19:37:34 -- common/autotest_common.sh@125 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:44:18.539 19:37:34 -- common/autotest_common.sh@127 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:44:18.539 19:37:34 -- common/autotest_common.sh@129 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:44:18.539 19:37:34 -- common/autotest_common.sh@131 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:44:18.539 19:37:34 -- common/autotest_common.sh@133 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:44:18.539 19:37:34 -- common/autotest_common.sh@135 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:44:18.539 19:37:34 -- common/autotest_common.sh@137 -- # : 00:44:18.539 19:37:34 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:44:18.539 19:37:34 -- common/autotest_common.sh@139 -- # : true 00:44:18.539 19:37:34 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:44:18.539 19:37:34 -- common/autotest_common.sh@141 -- # : 1 00:44:18.539 19:37:34 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:44:18.539 19:37:34 -- common/autotest_common.sh@143 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:44:18.539 19:37:34 -- common/autotest_common.sh@145 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:44:18.539 19:37:34 -- common/autotest_common.sh@147 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:44:18.539 19:37:34 -- common/autotest_common.sh@149 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:44:18.539 19:37:34 -- common/autotest_common.sh@151 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:44:18.539 19:37:34 -- common/autotest_common.sh@153 -- # : 00:44:18.539 19:37:34 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:44:18.539 19:37:34 -- common/autotest_common.sh@155 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:44:18.539 19:37:34 -- common/autotest_common.sh@157 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:44:18.539 19:37:34 -- common/autotest_common.sh@159 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:44:18.539 19:37:34 -- common/autotest_common.sh@161 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:44:18.539 19:37:34 -- common/autotest_common.sh@163 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:44:18.539 19:37:34 -- common/autotest_common.sh@166 -- # : 00:44:18.539 19:37:34 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:44:18.539 19:37:34 -- common/autotest_common.sh@168 -- # : 0 00:44:18.539 19:37:34 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:44:18.539 19:37:34 -- common/autotest_common.sh@170 -- # : 0 
00:44:18.539 19:37:34 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:44:18.539 19:37:34 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:18.539 19:37:34 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:44:18.539 19:37:34 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:44:18.539 19:37:34 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:44:18.539 19:37:34 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:44:18.539 19:37:34 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:44:18.539 19:37:34 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:44:18.539 19:37:34 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:44:18.539 19:37:34 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:44:18.539 19:37:34 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:44:18.539 19:37:34 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:44:18.539 19:37:34 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:44:18.539 
19:37:34 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:44:18.539 19:37:34 -- common/autotest_common.sh@199 -- # cat 00:44:18.539 19:37:34 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:44:18.539 19:37:34 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:44:18.539 19:37:34 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:44:18.539 19:37:34 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:44:18.539 19:37:34 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:44:18.539 19:37:34 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:44:18.539 19:37:34 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:44:18.539 19:37:34 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:44:18.539 19:37:34 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:44:18.539 19:37:34 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:44:18.539 19:37:34 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:44:18.539 19:37:34 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:44:18.539 19:37:34 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:44:18.539 19:37:34 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:44:18.539 19:37:34 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:44:18.539 19:37:34 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:44:18.539 19:37:34 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:44:18.539 19:37:34 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:44:18.539 19:37:34 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:44:18.539 19:37:34 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:44:18.539 19:37:34 -- common/autotest_common.sh@252 -- # export valgrind= 00:44:18.539 19:37:34 -- common/autotest_common.sh@252 -- # valgrind= 00:44:18.539 19:37:34 -- common/autotest_common.sh@258 -- # uname -s 00:44:18.539 19:37:34 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:44:18.539 19:37:34 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:44:18.539 19:37:34 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:44:18.539 19:37:34 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:44:18.539 19:37:34 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:44:18.539 19:37:34 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:44:18.539 19:37:34 -- common/autotest_common.sh@268 -- # MAKE=make 00:44:18.539 19:37:34 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:44:18.539 19:37:34 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:44:18.540 19:37:34 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:44:18.540 19:37:34 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:44:18.540 19:37:34 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:44:18.540 19:37:34 -- common/autotest_common.sh@307 -- # [[ -z 144555 ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@307 -- # kill -0 144555 00:44:18.540 19:37:34 -- 
common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:44:18.540 19:37:34 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:44:18.540 19:37:34 -- common/autotest_common.sh@320 -- # local mount target_dir 00:44:18.540 19:37:34 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:44:18.540 19:37:34 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:44:18.540 19:37:34 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:44:18.540 19:37:34 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:44:18.540 19:37:34 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.QcwIIz 00:44:18.540 19:37:34 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:44:18.540 19:37:34 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.QcwIIz/tests/interrupt /tmp/spdk.QcwIIz 00:44:18.540 19:37:34 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@316 -- # df -T 00:44:18.540 19:37:34 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=udev 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=6224465920 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6224465920 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=1249759232 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1254514688 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=4755456 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=10291838976 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=10308177920 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=6269952000 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # 
uses["$mount"]=2613248 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=6272565248 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop0 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=103089152 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109422592 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop2 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=41025536 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=41025536 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop1 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=96337920 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=96337920 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=1254510592 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # 
sizes["$mount"]=1254510592 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output 00:44:18.540 19:37:34 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # avails["$mount"]=90220208128 00:44:18.540 19:37:34 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:44:18.540 19:37:34 -- common/autotest_common.sh@352 -- # uses["$mount"]=9482571776 00:44:18.540 19:37:34 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:44:18.540 19:37:34 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:44:18.540 * Looking for test storage... 00:44:18.540 19:37:34 -- common/autotest_common.sh@357 -- # local target_space new_size 00:44:18.540 19:37:34 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:44:18.540 19:37:34 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.540 19:37:34 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:44:18.540 19:37:34 -- common/autotest_common.sh@361 -- # mount=/ 00:44:18.540 19:37:34 -- common/autotest_common.sh@363 -- # target_space=10291838976 00:44:18.540 19:37:34 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:44:18.540 19:37:34 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:44:18.540 19:37:34 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:44:18.540 19:37:34 -- common/autotest_common.sh@370 -- # new_size=12522770432 00:44:18.540 19:37:34 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:44:18.540 19:37:34 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.540 19:37:34 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.540 19:37:34 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:44:18.540 19:37:34 -- common/autotest_common.sh@378 -- # return 0 00:44:18.541 19:37:34 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:44:18.541 19:37:34 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:44:18.541 19:37:34 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:44:18.541 19:37:34 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:44:18.541 19:37:34 -- common/autotest_common.sh@1673 -- # true 00:44:18.541 19:37:34 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:44:18.541 19:37:34 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:44:18.541 19:37:34 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:44:18.541 19:37:34 -- common/autotest_common.sh@27 -- # exec 00:44:18.541 19:37:34 -- common/autotest_common.sh@29 -- # exec 00:44:18.541 19:37:34 -- common/autotest_common.sh@31 -- # 
xtrace_restore 00:44:18.541 19:37:34 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:44:18.541 19:37:34 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:44:18.541 19:37:34 -- common/autotest_common.sh@18 -- # set -x 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:44:18.541 19:37:34 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:44:18.541 19:37:34 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:44:18.541 19:37:34 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144605 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:44:18.541 19:37:34 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144605 /var/tmp/spdk.sock 00:44:18.541 19:37:34 -- common/autotest_common.sh@817 -- # '[' -z 144605 ']' 00:44:18.541 19:37:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:18.541 19:37:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:18.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:18.541 19:37:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:18.541 19:37:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:18.541 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:44:18.798 [2024-04-18 19:37:34.462980] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:18.798 [2024-04-18 19:37:34.463265] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144605 ] 00:44:18.798 [2024-04-18 19:37:34.653646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:19.056 [2024-04-18 19:37:34.880165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:19.056 [2024-04-18 19:37:34.880309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:19.056 [2024-04-18 19:37:34.880317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:19.313 [2024-04-18 19:37:35.231945] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:19.582 19:37:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:19.582 19:37:35 -- common/autotest_common.sh@850 -- # return 0 00:44:19.582 19:37:35 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:44:19.582 19:37:35 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:44:19.583 19:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:44:19.583 19:37:35 -- common/autotest_common.sh@10 -- # set +x 00:44:19.583 19:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:44:19.841 "name": "app_thread", 00:44:19.841 "id": 1, 00:44:19.841 "active_pollers": [], 00:44:19.841 "timed_pollers": [ 00:44:19.841 { 00:44:19.841 "name": "rpc_subsystem_poll_servers", 00:44:19.841 "id": 1, 00:44:19.841 "state": "waiting", 00:44:19.841 "run_count": 0, 00:44:19.841 "busy_count": 0, 00:44:19.841 "period_ticks": 8400000 00:44:19.841 } 00:44:19.841 ], 00:44:19.841 "paused_pollers": [] 00:44:19.841 }' 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:44:19.841 19:37:35 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:44:19.841 19:37:35 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:44:19.841 19:37:35 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:19.841 19:37:35 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:44:19.841 5000+0 records in 00:44:19.841 5000+0 records out 00:44:19.841 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0318743 s, 321 MB/s 00:44:19.841 19:37:35 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:44:20.100 AIO0 00:44:20.100 19:37:35 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:44:20.667 19:37:36 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:44:20.667 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:44:20.667 19:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:44:20.667 "name": "app_thread", 00:44:20.667 "id": 1, 00:44:20.667 "active_pollers": [], 00:44:20.667 "timed_pollers": [ 00:44:20.667 { 00:44:20.667 "name": "rpc_subsystem_poll_servers", 00:44:20.667 "id": 1, 00:44:20.667 "state": "waiting", 00:44:20.667 "run_count": 0, 00:44:20.667 "busy_count": 0, 00:44:20.667 "period_ticks": 8400000 00:44:20.667 } 00:44:20.667 ], 00:44:20.667 "paused_pollers": [] 00:44:20.667 }' 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:44:20.667 19:37:36 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 144605 00:44:20.667 19:37:36 -- common/autotest_common.sh@936 -- # '[' -z 144605 ']' 00:44:20.667 19:37:36 -- common/autotest_common.sh@940 -- # kill -0 144605 00:44:20.667 19:37:36 -- common/autotest_common.sh@941 -- # uname 00:44:20.667 19:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:20.667 19:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144605 00:44:20.667 19:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:20.937 killing process with pid 144605 00:44:20.937 19:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:20.937 19:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144605' 00:44:20.937 19:37:36 -- common/autotest_common.sh@955 -- # kill 144605 00:44:20.937 19:37:36 -- common/autotest_common.sh@960 -- # wait 144605 00:44:22.314 19:37:38 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:44:22.314 19:37:38 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:44:22.314 00:44:22.314 real 0m3.915s 00:44:22.314 user 0m3.509s 00:44:22.314 sys 0m0.545s 00:44:22.314 19:37:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:22.314 19:37:38 -- common/autotest_common.sh@10 -- # set +x 00:44:22.314 ************************************ 00:44:22.314 END TEST reap_unregistered_poller 00:44:22.314 ************************************ 00:44:22.314 19:37:38 -- spdk/autotest.sh@194 -- # uname -s 00:44:22.314 19:37:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:44:22.314 19:37:38 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:44:22.314 19:37:38 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:44:22.314 19:37:38 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:44:22.314 19:37:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:44:22.314 19:37:38 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:44:22.314 19:37:38 -- common/autotest_common.sh@10 -- # set +x 00:44:22.314 ************************************ 00:44:22.314 START TEST spdk_dd 00:44:22.314 ************************************ 00:44:22.314 19:37:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:44:22.573 * Looking for test storage... 00:44:22.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:44:22.573 19:37:38 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:22.573 19:37:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:22.573 19:37:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:22.573 19:37:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:22.573 19:37:38 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:22.573 19:37:38 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:22.573 19:37:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:22.573 19:37:38 -- paths/export.sh@5 -- # export PATH 00:44:22.573 19:37:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:22.573 19:37:38 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:22.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:22.830 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:44:23.768 19:37:39 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:44:23.768 19:37:39 -- dd/dd.sh@11 -- # nvme_in_userspace 00:44:23.768 19:37:39 -- scripts/common.sh@309 -- # local bdf bdfs 00:44:23.768 19:37:39 -- scripts/common.sh@310 -- # local nvmes 00:44:23.768 19:37:39 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:44:23.768 19:37:39 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:44:23.768 19:37:39 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:44:23.768 19:37:39 -- scripts/common.sh@295 -- # local bdf= 00:44:23.768 19:37:39 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:44:23.768 19:37:39 -- 
scripts/common.sh@230 -- # local class 00:44:23.768 19:37:39 -- scripts/common.sh@231 -- # local subclass 00:44:23.768 19:37:39 -- scripts/common.sh@232 -- # local progif 00:44:23.768 19:37:39 -- scripts/common.sh@233 -- # printf %02x 1 00:44:23.768 19:37:39 -- scripts/common.sh@233 -- # class=01 00:44:23.768 19:37:39 -- scripts/common.sh@234 -- # printf %02x 8 00:44:23.768 19:37:39 -- scripts/common.sh@234 -- # subclass=08 00:44:23.768 19:37:39 -- scripts/common.sh@235 -- # printf %02x 2 00:44:23.768 19:37:39 -- scripts/common.sh@235 -- # progif=02 00:44:23.768 19:37:39 -- scripts/common.sh@237 -- # hash lspci 00:44:23.768 19:37:39 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:44:23.768 19:37:39 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:44:23.768 19:37:39 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:44:23.768 19:37:39 -- scripts/common.sh@240 -- # grep -i -- -p02 00:44:23.768 19:37:39 -- scripts/common.sh@242 -- # tr -d '"' 00:44:23.768 19:37:39 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:44:23.768 19:37:39 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:44:23.768 19:37:39 -- scripts/common.sh@15 -- # local i 00:44:23.768 19:37:39 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:44:23.768 19:37:39 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:44:23.768 19:37:39 -- scripts/common.sh@24 -- # return 0 00:44:23.768 19:37:39 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:44:23.768 19:37:39 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:44:23.768 19:37:39 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:44:23.768 19:37:39 -- scripts/common.sh@320 -- # uname -s 00:44:23.768 19:37:39 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:44:23.768 19:37:39 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:44:23.768 19:37:39 -- scripts/common.sh@325 -- # (( 1 )) 00:44:23.768 19:37:39 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:44:23.768 19:37:39 -- dd/dd.sh@13 -- # check_liburing 00:44:23.768 19:37:39 -- dd/common.sh@139 -- # local lib so 00:44:23.768 19:37:39 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:44:23.768 19:37:39 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libuuid.so.1 == 
liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:44:23.768 19:37:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:44:23.768 19:37:39 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:44:23.768 19:37:39 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:44:23.768 19:37:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:44:23.768 19:37:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:23.768 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:44:23.768 ************************************ 00:44:23.768 START TEST spdk_dd_basic_rw 00:44:23.768 ************************************ 00:44:23.768 19:37:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:44:23.768 * Looking for test storage... 
00:44:23.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:44:23.768 19:37:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:23.768 19:37:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:23.768 19:37:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:23.768 19:37:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:23.768 19:37:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:23.768 19:37:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:23.768 19:37:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:23.768 19:37:39 -- paths/export.sh@5 -- # export PATH 00:44:23.768 19:37:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:23.768 19:37:39 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:44:23.768 19:37:39 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:44:23.769 19:37:39 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:44:23.769 19:37:39 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:44:23.769 19:37:39 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:44:23.769 19:37:39 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:44:23.769 19:37:39 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:44:23.769 19:37:39 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:23.769 19:37:39 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:23.769 19:37:39 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:10.0 00:44:23.769 19:37:39 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:44:23.769 19:37:39 -- dd/common.sh@126 -- # mapfile -t id 00:44:23.769 19:37:39 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:44:24.029 19:37:39 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2206 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:44:24.029 19:37:39 -- dd/common.sh@130 -- # lbaf=04 00:44:24.288 19:37:39 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not 
Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change 
Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2206 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:44:24.288 19:37:39 -- dd/common.sh@132 -- # lbaf=4096 00:44:24.288 19:37:39 -- dd/common.sh@134 -- # echo 4096 00:44:24.288 19:37:39 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:44:24.289 19:37:39 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:44:24.289 19:37:39 -- dd/basic_rw.sh@96 -- # : 00:44:24.289 19:37:39 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:44:24.289 19:37:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:24.289 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:44:24.289 19:37:39 -- dd/basic_rw.sh@96 -- # gen_conf 00:44:24.289 19:37:39 -- dd/common.sh@31 -- # xtrace_disable 00:44:24.289 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:44:24.289 ************************************ 00:44:24.289 START TEST dd_bs_lt_native_bs 
00:44:24.289 ************************************ 00:44:24.289 19:37:39 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:44:24.289 19:37:39 -- common/autotest_common.sh@638 -- # local es=0 00:44:24.289 19:37:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:44:24.289 19:37:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.289 19:37:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:44:24.289 19:37:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.289 19:37:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:44:24.289 19:37:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.289 19:37:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:44:24.289 19:37:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.289 19:37:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:24.289 19:37:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:44:24.289 { 00:44:24.289 "subsystems": [ 00:44:24.289 { 00:44:24.289 "subsystem": "bdev", 00:44:24.289 "config": [ 00:44:24.289 { 00:44:24.289 "params": { 00:44:24.289 "trtype": "pcie", 00:44:24.289 "traddr": "0000:00:10.0", 00:44:24.289 "name": "Nvme0" 00:44:24.289 }, 00:44:24.289 "method": "bdev_nvme_attach_controller" 00:44:24.289 }, 00:44:24.289 { 00:44:24.289 "method": "bdev_wait_for_examine" 00:44:24.289 } 00:44:24.289 ] 00:44:24.289 } 00:44:24.289 ] 00:44:24.289 } 00:44:24.289 [2024-04-18 19:37:40.077874] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:24.289 [2024-04-18 19:37:40.078371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144960 ] 00:44:24.552 [2024-04-18 19:37:40.255557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.810 [2024-04-18 19:37:40.474434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.067 [2024-04-18 19:37:40.895854] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:44:25.067 [2024-04-18 19:37:40.895955] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:26.002 [2024-04-18 19:37:41.800024] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:26.568 19:37:42 -- common/autotest_common.sh@641 -- # es=234 00:44:26.568 19:37:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:44:26.568 19:37:42 -- common/autotest_common.sh@650 -- # es=106 00:44:26.568 19:37:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:44:26.568 19:37:42 -- common/autotest_common.sh@658 -- # es=1 00:44:26.568 19:37:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:44:26.568 00:44:26.568 real 0m2.301s 00:44:26.568 user 0m2.043s 00:44:26.568 sys 0m0.223s 00:44:26.568 19:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:44:26.568 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:44:26.568 ************************************ 00:44:26.568 END TEST dd_bs_lt_native_bs 00:44:26.568 ************************************ 00:44:26.568 19:37:42 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:44:26.568 19:37:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:44:26.568 19:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:44:26.568 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:44:26.568 ************************************ 00:44:26.568 START TEST dd_rw 00:44:26.568 ************************************ 00:44:26.568 19:37:42 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:44:26.568 19:37:42 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:44:26.568 19:37:42 -- dd/basic_rw.sh@12 -- # local count size 00:44:26.568 19:37:42 -- dd/basic_rw.sh@13 -- # local qds bss 00:44:26.568 19:37:42 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:44:26.568 19:37:42 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:44:26.568 19:37:42 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:44:26.568 19:37:42 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:44:26.568 19:37:42 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:44:26.568 19:37:42 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:44:26.568 19:37:42 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:44:26.568 19:37:42 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:44:26.568 19:37:42 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:44:26.568 19:37:42 -- dd/basic_rw.sh@23 -- # count=15 00:44:26.568 19:37:42 -- dd/basic_rw.sh@24 -- # count=15 00:44:26.568 19:37:42 -- dd/basic_rw.sh@25 -- # size=61440 00:44:26.568 19:37:42 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:44:26.568 19:37:42 -- dd/common.sh@98 -- # xtrace_disable 00:44:26.568 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:44:27.135 19:37:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
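The dd_rw cycle that starts here follows one fixed pattern per block size and queue depth: gen_bytes fills dd.dump0 with `size` bytes, spdk_dd writes them to the Nvme0n1 bdev at the chosen --bs/--qd, the same number of blocks is read back into dd.dump1, and the two dumps are diffed; clear_nvme then zeroes the region before the next pass. A condensed sketch of one iteration, reusing $SPDK_DD and $conf from the sketch above and the values from this trace (15 blocks of 4096 bytes, 61440 bytes total):

    # One dd_rw iteration reduced to plain shell (sketch; error handling omitted).
    bs=4096; qd=1; count=15; size=$((bs * count))            # 61440 bytes, as traced
    head -c "$size" /dev/urandom > dd.dump0                  # stand-in for gen_bytes
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" \
               --json <(printf '%s' "$conf")                 # write to the bdev
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" \
               --json <(printf '%s' "$conf")                 # read the same blocks back
    diff -q dd.dump0 dd.dump1                                # must report no difference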
00:44:27.135 19:37:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:44:27.135 19:37:43 -- dd/common.sh@31 -- # xtrace_disable 00:44:27.135 19:37:43 -- common/autotest_common.sh@10 -- # set +x 00:44:27.403 { 00:44:27.403 "subsystems": [ 00:44:27.403 { 00:44:27.403 "subsystem": "bdev", 00:44:27.403 "config": [ 00:44:27.403 { 00:44:27.403 "params": { 00:44:27.403 "trtype": "pcie", 00:44:27.403 "traddr": "0000:00:10.0", 00:44:27.403 "name": "Nvme0" 00:44:27.403 }, 00:44:27.403 "method": "bdev_nvme_attach_controller" 00:44:27.403 }, 00:44:27.403 { 00:44:27.403 "method": "bdev_wait_for_examine" 00:44:27.403 } 00:44:27.403 ] 00:44:27.403 } 00:44:27.403 ] 00:44:27.403 } 00:44:27.403 [2024-04-18 19:37:43.069714] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:27.403 [2024-04-18 19:37:43.069876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145025 ] 00:44:27.403 [2024-04-18 19:37:43.234001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.663 [2024-04-18 19:37:43.525254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:29.666  Copying: 60/60 [kB] (average 29 MBps) 00:44:29.666 00:44:29.666 19:37:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:44:29.666 19:37:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:44:29.666 19:37:45 -- dd/common.sh@31 -- # xtrace_disable 00:44:29.666 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:44:29.666 { 00:44:29.666 "subsystems": [ 00:44:29.666 { 00:44:29.666 "subsystem": "bdev", 00:44:29.666 "config": [ 00:44:29.666 { 00:44:29.666 "params": { 00:44:29.666 "trtype": "pcie", 00:44:29.666 "traddr": "0000:00:10.0", 00:44:29.666 "name": "Nvme0" 00:44:29.666 }, 00:44:29.666 "method": "bdev_nvme_attach_controller" 00:44:29.666 }, 00:44:29.666 { 00:44:29.666 "method": "bdev_wait_for_examine" 00:44:29.666 } 00:44:29.666 ] 00:44:29.666 } 00:44:29.666 ] 00:44:29.666 } 00:44:29.666 [2024-04-18 19:37:45.351222] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:29.666 [2024-04-18 19:37:45.351443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145056 ] 00:44:29.666 [2024-04-18 19:37:45.512450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.924 [2024-04-18 19:37:45.743232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.897  Copying: 60/60 [kB] (average 29 MBps) 00:44:31.897 00:44:31.897 19:37:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:31.897 19:37:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:44:31.897 19:37:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:44:31.897 19:37:47 -- dd/common.sh@11 -- # local nvme_ref= 00:44:31.897 19:37:47 -- dd/common.sh@12 -- # local size=61440 00:44:31.897 19:37:47 -- dd/common.sh@14 -- # local bs=1048576 00:44:31.897 19:37:47 -- dd/common.sh@15 -- # local count=1 00:44:31.897 19:37:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:44:31.897 19:37:47 -- dd/common.sh@18 -- # gen_conf 00:44:31.897 19:37:47 -- dd/common.sh@31 -- # xtrace_disable 00:44:31.897 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:44:31.897 { 00:44:31.897 "subsystems": [ 00:44:31.897 { 00:44:31.897 "subsystem": "bdev", 00:44:31.897 "config": [ 00:44:31.897 { 00:44:31.897 "params": { 00:44:31.897 "trtype": "pcie", 00:44:31.897 "traddr": "0000:00:10.0", 00:44:31.897 "name": "Nvme0" 00:44:31.897 }, 00:44:31.897 "method": "bdev_nvme_attach_controller" 00:44:31.897 }, 00:44:31.897 { 00:44:31.897 "method": "bdev_wait_for_examine" 00:44:31.897 } 00:44:31.897 ] 00:44:31.897 } 00:44:31.897 ] 00:44:31.897 } 00:44:31.897 [2024-04-18 19:37:47.738084] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
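The clear_nvme call traced just above (`clear_nvme Nvme0n1 '' 61440`, with the middle argument an unused nvme_ref) resets the region under test between passes: it streams zeroes from /dev/zero over at least the test size in 1 MiB blocks, so stale data from an earlier pass cannot let a later diff succeed by accident. The helper's full body lives in dd/common.sh and is not reproduced in this log, so the following is only an approximation of what the trace shows (the round-up is inferred; the trace simply sets count=1):

    # Approximation of clear_nvme as traced; reuses SPDK_DD and conf from above.
    clear_nvme_sketch() {
      local bdev=$1 size=$2
      local bs=1048576
      local count=$(( (size + bs - 1) / bs ))      # whole 1 MiB blocks covering size
      "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" \
                 --json <(printf '%s' "$conf")
    }
    clear_nvme_sketch Nvme0n1 61440                # 61440 B rounds up to one 1 MiB block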
00:44:31.897 [2024-04-18 19:37:47.738284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145085 ] 00:44:32.166 [2024-04-18 19:37:47.904117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.425 [2024-04-18 19:37:48.131240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:34.059  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:34.059 00:44:34.059 19:37:49 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:44:34.059 19:37:49 -- dd/basic_rw.sh@23 -- # count=15 00:44:34.059 19:37:49 -- dd/basic_rw.sh@24 -- # count=15 00:44:34.059 19:37:49 -- dd/basic_rw.sh@25 -- # size=61440 00:44:34.059 19:37:49 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:44:34.059 19:37:49 -- dd/common.sh@98 -- # xtrace_disable 00:44:34.059 19:37:49 -- common/autotest_common.sh@10 -- # set +x 00:44:34.993 19:37:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:44:34.993 19:37:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:44:34.993 19:37:50 -- dd/common.sh@31 -- # xtrace_disable 00:44:34.993 19:37:50 -- common/autotest_common.sh@10 -- # set +x 00:44:34.993 { 00:44:34.993 "subsystems": [ 00:44:34.993 { 00:44:34.993 "subsystem": "bdev", 00:44:34.993 "config": [ 00:44:34.993 { 00:44:34.993 "params": { 00:44:34.993 "trtype": "pcie", 00:44:34.993 "traddr": "0000:00:10.0", 00:44:34.993 "name": "Nvme0" 00:44:34.993 }, 00:44:34.993 "method": "bdev_nvme_attach_controller" 00:44:34.993 }, 00:44:34.993 { 00:44:34.993 "method": "bdev_wait_for_examine" 00:44:34.993 } 00:44:34.993 ] 00:44:34.993 } 00:44:34.993 ] 00:44:34.993 } 00:44:34.993 [2024-04-18 19:37:50.714941] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:34.993 [2024-04-18 19:37:50.715135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145143 ] 00:44:34.993 [2024-04-18 19:37:50.884367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.558 [2024-04-18 19:37:51.185916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:37.259  Copying: 60/60 [kB] (average 58 MBps) 00:44:37.259 00:44:37.259 19:37:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:44:37.259 19:37:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:44:37.259 19:37:53 -- dd/common.sh@31 -- # xtrace_disable 00:44:37.259 19:37:53 -- common/autotest_common.sh@10 -- # set +x 00:44:37.259 { 00:44:37.259 "subsystems": [ 00:44:37.259 { 00:44:37.259 "subsystem": "bdev", 00:44:37.259 "config": [ 00:44:37.259 { 00:44:37.259 "params": { 00:44:37.259 "trtype": "pcie", 00:44:37.259 "traddr": "0000:00:10.0", 00:44:37.259 "name": "Nvme0" 00:44:37.259 }, 00:44:37.259 "method": "bdev_nvme_attach_controller" 00:44:37.259 }, 00:44:37.259 { 00:44:37.259 "method": "bdev_wait_for_examine" 00:44:37.259 } 00:44:37.259 ] 00:44:37.259 } 00:44:37.259 ] 00:44:37.259 } 00:44:37.259 [2024-04-18 19:37:53.125789] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:37.259 [2024-04-18 19:37:53.126035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145171 ] 00:44:37.517 [2024-04-18 19:37:53.294332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.774 [2024-04-18 19:37:53.550547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.715  Copying: 60/60 [kB] (average 58 MBps) 00:44:39.715 00:44:39.715 19:37:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:39.715 19:37:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:44:39.715 19:37:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:44:39.715 19:37:55 -- dd/common.sh@11 -- # local nvme_ref= 00:44:39.715 19:37:55 -- dd/common.sh@12 -- # local size=61440 00:44:39.715 19:37:55 -- dd/common.sh@14 -- # local bs=1048576 00:44:39.715 19:37:55 -- dd/common.sh@15 -- # local count=1 00:44:39.715 19:37:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:44:39.715 19:37:55 -- dd/common.sh@18 -- # gen_conf 00:44:39.715 19:37:55 -- dd/common.sh@31 -- # xtrace_disable 00:44:39.715 19:37:55 -- common/autotest_common.sh@10 -- # set +x 00:44:39.715 { 00:44:39.715 "subsystems": [ 00:44:39.715 { 00:44:39.715 "subsystem": "bdev", 00:44:39.715 "config": [ 00:44:39.715 { 00:44:39.715 "params": { 00:44:39.715 "trtype": "pcie", 00:44:39.715 "traddr": "0000:00:10.0", 00:44:39.715 "name": "Nvme0" 00:44:39.715 }, 00:44:39.715 "method": "bdev_nvme_attach_controller" 00:44:39.715 }, 00:44:39.715 { 00:44:39.715 "method": "bdev_wait_for_examine" 00:44:39.715 } 00:44:39.715 ] 00:44:39.715 } 00:44:39.715 ] 00:44:39.715 } 00:44:39.715 [2024-04-18 19:37:55.423397] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:39.715 [2024-04-18 19:37:55.423619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145211 ] 00:44:39.715 [2024-04-18 19:37:55.597449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:39.974 [2024-04-18 19:37:55.838013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:41.923  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:41.923 00:44:41.923 19:37:57 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:44:41.923 19:37:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:44:41.923 19:37:57 -- dd/basic_rw.sh@23 -- # count=7 00:44:41.923 19:37:57 -- dd/basic_rw.sh@24 -- # count=7 00:44:41.923 19:37:57 -- dd/basic_rw.sh@25 -- # size=57344 00:44:41.923 19:37:57 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:44:41.923 19:37:57 -- dd/common.sh@98 -- # xtrace_disable 00:44:41.923 19:37:57 -- common/autotest_common.sh@10 -- # set +x 00:44:42.489 19:37:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:44:42.489 19:37:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:44:42.489 19:37:58 -- dd/common.sh@31 -- # xtrace_disable 00:44:42.489 19:37:58 -- common/autotest_common.sh@10 -- # set +x 00:44:42.489 { 00:44:42.489 "subsystems": [ 00:44:42.489 { 00:44:42.489 "subsystem": "bdev", 00:44:42.489 "config": [ 00:44:42.489 { 00:44:42.489 "params": { 00:44:42.489 "trtype": "pcie", 00:44:42.489 "traddr": "0000:00:10.0", 00:44:42.489 "name": "Nvme0" 00:44:42.489 }, 00:44:42.489 "method": "bdev_nvme_attach_controller" 00:44:42.489 }, 00:44:42.489 { 00:44:42.489 "method": "bdev_wait_for_examine" 00:44:42.489 } 00:44:42.489 ] 00:44:42.489 } 00:44:42.489 ] 00:44:42.489 } 00:44:42.748 [2024-04-18 19:37:58.414214] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
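The block sizes being iterated come from the 4096-byte native block size detected earlier: `bss+=($((native_bs << bs)))` for bs in 0..2 yields 4096, 8192 and 16384, and for each one the harness uses a block count that keeps the transfer in the same range, matching the size= values in this trace (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152). Spelled out as a small check (the counts 15/7/3 are the ones observed in this log):

    # How the traced sizes come about; values reproduce the ones in this log.
    native_bs=4096
    bss=()
    for bs in {0..2}; do bss+=($((native_bs << bs))); done   # 4096 8192 16384
    counts=(15 7 3)                                          # per block size, as traced
    for i in "${!bss[@]}"; do
      echo "bs=${bss[$i]} count=${counts[$i]} size=$(( bss[i] * counts[i] ))"
    done
    # -> bs=4096  count=15 size=61440
    # -> bs=8192  count=7  size=57344
    # -> bs=16384 count=3  size=49152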
00:44:42.748 [2024-04-18 19:37:58.414399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145250 ] 00:44:42.748 [2024-04-18 19:37:58.578053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.007 [2024-04-18 19:37:58.804574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:44.946  Copying: 56/56 [kB] (average 27 MBps) 00:44:44.946 00:44:44.946 19:38:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:44:44.946 19:38:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:44:44.946 19:38:00 -- dd/common.sh@31 -- # xtrace_disable 00:44:44.946 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:44:44.946 { 00:44:44.946 "subsystems": [ 00:44:44.946 { 00:44:44.946 "subsystem": "bdev", 00:44:44.946 "config": [ 00:44:44.946 { 00:44:44.946 "params": { 00:44:44.946 "trtype": "pcie", 00:44:44.946 "traddr": "0000:00:10.0", 00:44:44.946 "name": "Nvme0" 00:44:44.946 }, 00:44:44.946 "method": "bdev_nvme_attach_controller" 00:44:44.946 }, 00:44:44.946 { 00:44:44.946 "method": "bdev_wait_for_examine" 00:44:44.946 } 00:44:44.946 ] 00:44:44.946 } 00:44:44.946 ] 00:44:44.946 } 00:44:44.946 [2024-04-18 19:38:00.626094] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:44.946 [2024-04-18 19:38:00.626339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145299 ] 00:44:44.946 [2024-04-18 19:38:00.807071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:45.205 [2024-04-18 19:38:01.067764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.146  Copying: 56/56 [kB] (average 54 MBps) 00:44:47.146 00:44:47.146 19:38:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:47.146 19:38:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:44:47.146 19:38:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:44:47.146 19:38:02 -- dd/common.sh@11 -- # local nvme_ref= 00:44:47.146 19:38:02 -- dd/common.sh@12 -- # local size=57344 00:44:47.146 19:38:02 -- dd/common.sh@14 -- # local bs=1048576 00:44:47.146 19:38:02 -- dd/common.sh@15 -- # local count=1 00:44:47.146 19:38:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:44:47.146 19:38:02 -- dd/common.sh@18 -- # gen_conf 00:44:47.146 19:38:02 -- dd/common.sh@31 -- # xtrace_disable 00:44:47.146 19:38:02 -- common/autotest_common.sh@10 -- # set +x 00:44:47.146 { 00:44:47.146 "subsystems": [ 00:44:47.146 { 00:44:47.146 "subsystem": "bdev", 00:44:47.146 "config": [ 00:44:47.146 { 00:44:47.146 "params": { 00:44:47.146 "trtype": "pcie", 00:44:47.146 "traddr": "0000:00:10.0", 00:44:47.146 "name": "Nvme0" 00:44:47.146 }, 00:44:47.146 "method": "bdev_nvme_attach_controller" 00:44:47.146 }, 00:44:47.146 { 00:44:47.146 "method": "bdev_wait_for_examine" 00:44:47.146 } 00:44:47.146 ] 00:44:47.146 } 00:44:47.146 ] 00:44:47.146 } 00:44:47.146 [2024-04-18 19:38:02.981028] Starting SPDK v24.05-pre git sha1 
99b3305a5 / DPDK 23.11.0 initialization... 00:44:47.146 [2024-04-18 19:38:02.981540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145331 ] 00:44:47.404 [2024-04-18 19:38:03.169974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.661 [2024-04-18 19:38:03.396799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:49.363  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:49.363 00:44:49.363 19:38:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:44:49.363 19:38:05 -- dd/basic_rw.sh@23 -- # count=7 00:44:49.363 19:38:05 -- dd/basic_rw.sh@24 -- # count=7 00:44:49.363 19:38:05 -- dd/basic_rw.sh@25 -- # size=57344 00:44:49.363 19:38:05 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:44:49.363 19:38:05 -- dd/common.sh@98 -- # xtrace_disable 00:44:49.363 19:38:05 -- common/autotest_common.sh@10 -- # set +x 00:44:49.926 19:38:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:44:49.926 19:38:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:44:49.926 19:38:05 -- dd/common.sh@31 -- # xtrace_disable 00:44:49.926 19:38:05 -- common/autotest_common.sh@10 -- # set +x 00:44:50.185 { 00:44:50.185 "subsystems": [ 00:44:50.185 { 00:44:50.185 "subsystem": "bdev", 00:44:50.185 "config": [ 00:44:50.185 { 00:44:50.185 "params": { 00:44:50.185 "trtype": "pcie", 00:44:50.185 "traddr": "0000:00:10.0", 00:44:50.185 "name": "Nvme0" 00:44:50.185 }, 00:44:50.185 "method": "bdev_nvme_attach_controller" 00:44:50.185 }, 00:44:50.185 { 00:44:50.185 "method": "bdev_wait_for_examine" 00:44:50.185 } 00:44:50.185 ] 00:44:50.185 } 00:44:50.185 ] 00:44:50.185 } 00:44:50.185 [2024-04-18 19:38:05.860338] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:50.185 [2024-04-18 19:38:05.860509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145371 ] 00:44:50.185 [2024-04-18 19:38:06.028279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.443 [2024-04-18 19:38:06.264830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:52.385  Copying: 56/56 [kB] (average 54 MBps) 00:44:52.385 00:44:52.385 19:38:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:44:52.385 19:38:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:44:52.385 19:38:08 -- dd/common.sh@31 -- # xtrace_disable 00:44:52.385 19:38:08 -- common/autotest_common.sh@10 -- # set +x 00:44:52.385 [2024-04-18 19:38:08.176981] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:52.385 { 00:44:52.385 "subsystems": [ 00:44:52.385 { 00:44:52.385 "subsystem": "bdev", 00:44:52.385 "config": [ 00:44:52.385 { 00:44:52.385 "params": { 00:44:52.385 "trtype": "pcie", 00:44:52.385 "traddr": "0000:00:10.0", 00:44:52.385 "name": "Nvme0" 00:44:52.385 }, 00:44:52.385 "method": "bdev_nvme_attach_controller" 00:44:52.385 }, 00:44:52.385 { 00:44:52.385 "method": "bdev_wait_for_examine" 00:44:52.385 } 00:44:52.385 ] 00:44:52.385 } 00:44:52.385 ] 00:44:52.385 } 00:44:52.385 [2024-04-18 19:38:08.177436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145402 ] 00:44:52.643 [2024-04-18 19:38:08.346912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.900 [2024-04-18 19:38:08.628572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:54.840  Copying: 56/56 [kB] (average 54 MBps) 00:44:54.840 00:44:54.840 19:38:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:54.840 19:38:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:44:54.840 19:38:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:44:54.840 19:38:10 -- dd/common.sh@11 -- # local nvme_ref= 00:44:54.840 19:38:10 -- dd/common.sh@12 -- # local size=57344 00:44:54.840 19:38:10 -- dd/common.sh@14 -- # local bs=1048576 00:44:54.840 19:38:10 -- dd/common.sh@15 -- # local count=1 00:44:54.840 19:38:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:44:54.840 19:38:10 -- dd/common.sh@18 -- # gen_conf 00:44:54.840 19:38:10 -- dd/common.sh@31 -- # xtrace_disable 00:44:54.840 19:38:10 -- common/autotest_common.sh@10 -- # set +x 00:44:54.840 { 00:44:54.840 "subsystems": [ 00:44:54.840 { 00:44:54.840 "subsystem": "bdev", 00:44:54.840 "config": [ 00:44:54.840 { 00:44:54.840 "params": { 00:44:54.840 "trtype": "pcie", 00:44:54.840 "traddr": "0000:00:10.0", 00:44:54.840 "name": "Nvme0" 00:44:54.840 }, 00:44:54.840 "method": "bdev_nvme_attach_controller" 00:44:54.840 }, 00:44:54.840 { 00:44:54.840 "method": "bdev_wait_for_examine" 00:44:54.840 } 00:44:54.840 ] 00:44:54.840 } 00:44:54.840 ] 00:44:54.840 } 00:44:54.840 [2024-04-18 19:38:10.477261] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:54.840 [2024-04-18 19:38:10.477672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145453 ] 00:44:54.840 [2024-04-18 19:38:10.658595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.099 [2024-04-18 19:38:10.947308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:57.041  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:57.041 00:44:57.041 19:38:12 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:44:57.041 19:38:12 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:44:57.041 19:38:12 -- dd/basic_rw.sh@23 -- # count=3 00:44:57.041 19:38:12 -- dd/basic_rw.sh@24 -- # count=3 00:44:57.041 19:38:12 -- dd/basic_rw.sh@25 -- # size=49152 00:44:57.041 19:38:12 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:44:57.041 19:38:12 -- dd/common.sh@98 -- # xtrace_disable 00:44:57.041 19:38:12 -- common/autotest_common.sh@10 -- # set +x 00:44:57.607 19:38:13 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:44:57.607 19:38:13 -- dd/basic_rw.sh@30 -- # gen_conf 00:44:57.607 19:38:13 -- dd/common.sh@31 -- # xtrace_disable 00:44:57.607 19:38:13 -- common/autotest_common.sh@10 -- # set +x 00:44:57.607 { 00:44:57.607 "subsystems": [ 00:44:57.607 { 00:44:57.607 "subsystem": "bdev", 00:44:57.607 "config": [ 00:44:57.607 { 00:44:57.607 "params": { 00:44:57.607 "trtype": "pcie", 00:44:57.607 "traddr": "0000:00:10.0", 00:44:57.607 "name": "Nvme0" 00:44:57.607 }, 00:44:57.607 "method": "bdev_nvme_attach_controller" 00:44:57.607 }, 00:44:57.607 { 00:44:57.607 "method": "bdev_wait_for_examine" 00:44:57.607 } 00:44:57.607 ] 00:44:57.607 } 00:44:57.607 ] 00:44:57.607 } 00:44:57.607 [2024-04-18 19:38:13.438535] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:44:57.607 [2024-04-18 19:38:13.438955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145492 ] 00:44:57.866 [2024-04-18 19:38:13.619178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:58.126 [2024-04-18 19:38:13.854760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:59.763  Copying: 48/48 [kB] (average 46 MBps) 00:44:59.763 00:44:59.763 19:38:15 -- dd/basic_rw.sh@37 -- # gen_conf 00:44:59.763 19:38:15 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:44:59.764 19:38:15 -- dd/common.sh@31 -- # xtrace_disable 00:44:59.764 19:38:15 -- common/autotest_common.sh@10 -- # set +x 00:44:59.764 [2024-04-18 19:38:15.632909] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:44:59.764 [2024-04-18 19:38:15.633299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145531 ] 00:44:59.764 { 00:44:59.764 "subsystems": [ 00:44:59.764 { 00:44:59.764 "subsystem": "bdev", 00:44:59.764 "config": [ 00:44:59.764 { 00:44:59.764 "params": { 00:44:59.764 "trtype": "pcie", 00:44:59.764 "traddr": "0000:00:10.0", 00:44:59.764 "name": "Nvme0" 00:44:59.764 }, 00:44:59.764 "method": "bdev_nvme_attach_controller" 00:44:59.764 }, 00:44:59.764 { 00:44:59.764 "method": "bdev_wait_for_examine" 00:44:59.764 } 00:44:59.764 ] 00:44:59.764 } 00:44:59.764 ] 00:44:59.764 } 00:45:00.021 [2024-04-18 19:38:15.796881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:00.279 [2024-04-18 19:38:16.019303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.957  Copying: 48/48 [kB] (average 46 MBps) 00:45:01.957 00:45:01.957 19:38:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:01.957 19:38:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:45:01.957 19:38:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:45:01.957 19:38:17 -- dd/common.sh@11 -- # local nvme_ref= 00:45:01.957 19:38:17 -- dd/common.sh@12 -- # local size=49152 00:45:01.957 19:38:17 -- dd/common.sh@14 -- # local bs=1048576 00:45:01.957 19:38:17 -- dd/common.sh@15 -- # local count=1 00:45:01.957 19:38:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:45:01.957 19:38:17 -- dd/common.sh@18 -- # gen_conf 00:45:01.957 19:38:17 -- dd/common.sh@31 -- # xtrace_disable 00:45:01.957 19:38:17 -- common/autotest_common.sh@10 -- # set +x 00:45:02.215 { 00:45:02.215 "subsystems": [ 00:45:02.215 { 00:45:02.215 "subsystem": "bdev", 00:45:02.215 "config": [ 00:45:02.215 { 00:45:02.215 "params": { 00:45:02.215 "trtype": "pcie", 00:45:02.215 "traddr": "0000:00:10.0", 00:45:02.215 "name": "Nvme0" 00:45:02.215 }, 00:45:02.215 "method": "bdev_nvme_attach_controller" 00:45:02.215 }, 00:45:02.215 { 00:45:02.215 "method": "bdev_wait_for_examine" 00:45:02.215 } 00:45:02.215 ] 00:45:02.215 } 00:45:02.215 ] 00:45:02.215 } 00:45:02.215 [2024-04-18 19:38:17.951894] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:02.215 [2024-04-18 19:38:17.952928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145563 ] 00:45:02.472 [2024-04-18 19:38:18.144249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:02.472 [2024-04-18 19:38:18.373584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:04.430  Copying: 1024/1024 [kB] (average 1000 MBps) 00:45:04.430 00:45:04.430 19:38:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:45:04.430 19:38:20 -- dd/basic_rw.sh@23 -- # count=3 00:45:04.430 19:38:20 -- dd/basic_rw.sh@24 -- # count=3 00:45:04.430 19:38:20 -- dd/basic_rw.sh@25 -- # size=49152 00:45:04.430 19:38:20 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:45:04.430 19:38:20 -- dd/common.sh@98 -- # xtrace_disable 00:45:04.430 19:38:20 -- common/autotest_common.sh@10 -- # set +x 00:45:04.996 19:38:20 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:45:04.996 19:38:20 -- dd/basic_rw.sh@30 -- # gen_conf 00:45:04.996 19:38:20 -- dd/common.sh@31 -- # xtrace_disable 00:45:04.996 19:38:20 -- common/autotest_common.sh@10 -- # set +x 00:45:04.996 { 00:45:04.996 "subsystems": [ 00:45:04.996 { 00:45:04.996 "subsystem": "bdev", 00:45:04.996 "config": [ 00:45:04.996 { 00:45:04.996 "params": { 00:45:04.996 "trtype": "pcie", 00:45:04.996 "traddr": "0000:00:10.0", 00:45:04.996 "name": "Nvme0" 00:45:04.996 }, 00:45:04.996 "method": "bdev_nvme_attach_controller" 00:45:04.996 }, 00:45:04.996 { 00:45:04.996 "method": "bdev_wait_for_examine" 00:45:04.996 } 00:45:04.996 ] 00:45:04.996 } 00:45:04.996 ] 00:45:04.996 } 00:45:04.996 [2024-04-18 19:38:20.862199] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:04.996 [2024-04-18 19:38:20.862560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145619 ] 00:45:05.254 [2024-04-18 19:38:21.025658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:05.511 [2024-04-18 19:38:21.294376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.456  Copying: 48/48 [kB] (average 46 MBps) 00:45:07.456 00:45:07.456 19:38:23 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:45:07.456 19:38:23 -- dd/basic_rw.sh@37 -- # gen_conf 00:45:07.456 19:38:23 -- dd/common.sh@31 -- # xtrace_disable 00:45:07.456 19:38:23 -- common/autotest_common.sh@10 -- # set +x 00:45:07.456 { 00:45:07.456 "subsystems": [ 00:45:07.456 { 00:45:07.456 "subsystem": "bdev", 00:45:07.456 "config": [ 00:45:07.456 { 00:45:07.456 "params": { 00:45:07.456 "trtype": "pcie", 00:45:07.456 "traddr": "0000:00:10.0", 00:45:07.456 "name": "Nvme0" 00:45:07.456 }, 00:45:07.456 "method": "bdev_nvme_attach_controller" 00:45:07.456 }, 00:45:07.456 { 00:45:07.456 "method": "bdev_wait_for_examine" 00:45:07.456 } 00:45:07.456 ] 00:45:07.456 } 00:45:07.456 ] 00:45:07.456 } 00:45:07.456 [2024-04-18 19:38:23.230135] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:07.456 [2024-04-18 19:38:23.230439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145650 ] 00:45:07.715 [2024-04-18 19:38:23.394681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:07.715 [2024-04-18 19:38:23.633182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:09.655  Copying: 48/48 [kB] (average 46 MBps) 00:45:09.655 00:45:09.655 19:38:25 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:09.655 19:38:25 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:45:09.655 19:38:25 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:45:09.655 19:38:25 -- dd/common.sh@11 -- # local nvme_ref= 00:45:09.655 19:38:25 -- dd/common.sh@12 -- # local size=49152 00:45:09.655 19:38:25 -- dd/common.sh@14 -- # local bs=1048576 00:45:09.655 19:38:25 -- dd/common.sh@15 -- # local count=1 00:45:09.655 19:38:25 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:45:09.655 19:38:25 -- dd/common.sh@18 -- # gen_conf 00:45:09.655 19:38:25 -- dd/common.sh@31 -- # xtrace_disable 00:45:09.655 19:38:25 -- common/autotest_common.sh@10 -- # set +x 00:45:09.655 { 00:45:09.655 "subsystems": [ 00:45:09.655 { 00:45:09.655 "subsystem": "bdev", 00:45:09.655 "config": [ 00:45:09.655 { 00:45:09.655 "params": { 00:45:09.655 "trtype": "pcie", 00:45:09.655 "traddr": "0000:00:10.0", 00:45:09.655 "name": "Nvme0" 00:45:09.655 }, 00:45:09.655 "method": "bdev_nvme_attach_controller" 00:45:09.655 }, 00:45:09.655 { 00:45:09.655 "method": "bdev_wait_for_examine" 00:45:09.655 } 00:45:09.655 ] 00:45:09.655 } 00:45:09.655 ] 00:45:09.655 } 00:45:09.655 [2024-04-18 19:38:25.542978] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:09.655 [2024-04-18 19:38:25.543516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145679 ] 00:45:09.912 [2024-04-18 19:38:25.722919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:10.170 [2024-04-18 19:38:25.948590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:12.112  Copying: 1024/1024 [kB] (average 1000 MBps) 00:45:12.112 00:45:12.112 ************************************ 00:45:12.112 END TEST dd_rw 00:45:12.112 ************************************ 00:45:12.112 00:45:12.112 real 0m45.424s 00:45:12.112 user 0m39.392s 00:45:12.112 sys 0m4.722s 00:45:12.112 19:38:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:12.112 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:45:12.112 19:38:27 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:45:12.112 19:38:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:12.112 19:38:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:12.112 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:45:12.112 ************************************ 00:45:12.112 START TEST dd_rw_offset 00:45:12.112 ************************************ 00:45:12.112 19:38:27 -- common/autotest_common.sh@1111 -- # basic_offset 00:45:12.112 19:38:27 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:45:12.112 19:38:27 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:45:12.112 19:38:27 -- dd/common.sh@98 -- # xtrace_disable 00:45:12.112 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:45:12.112 19:38:27 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:45:12.112 19:38:27 -- dd/basic_rw.sh@56 -- # 
data=2fnpuqzx1f98pteui83mgs0c4zpxufggg75kgnh7h9qv9udld05d2dn7axd7o3j4z5hd70chc9emsj5glrvw1lh5rcqji5bwil2kzxg2k50tj04btr1ursj2qui1m2tomim4qhoirzrsyophvynmdxmhhg6ksbskrirdync3pbqwegv7vq96b8thbu6u0drdtlmd77ih9crf2tbi8fhdphh74o3rr800keh15jmsue6xfc9cwh6sbp7a8xra9zhdnq1wbzkpwcnh3ij45ao6827nuene9knilv8bdytajbhz7zht1nbebvsj4pufpnrv4icpi93iocue9527plbqlv4w4aebpsls7ovxe3sv1kpmetstu46rfvgbwcwuy5o9wu4ne0mlssk6vgaqadgd8ymrcncxp7rp1oe71jbdkh27t4sgocneej4ogi7r7nr0hi63g2bnjgvbp619gw6zle3zt204j331p0oa6098h9uo2webkf8q9zobfftdor93u9lz32qem3gxwumehw3rbx4b9arv7e1qqnyvs9vmobmv10fwpytzkey857bgo21piyhs4mn9w4nesrqppa9v9nt7lixckhvhdsnmxm5kt7cqifiucjg9cwy3fd2u12222tae48r3qqnytl7lvfdg9eqmqngmtbgyyndcobxy3nbir87f3pev6zshnyts0xfhg5y74l5jkve6ewaw213n8kxwj27fq9rtzcpqcfkyrmw3eyinsfccx6aj4hm3ob8kihfogqpj7cen8haxoqc8onxsspbj3l2bnp6j3ym4ukab8nb1pbkkqeplcco6vxvsfsqmlp6il41ncj59375ssa0evrrm2sj5qzv0oz0zf9v3rk2dc7phs0v43uylh8dc87rq7fct3n28sdceap0m75rdwp7k8ezuy3mx9ygvf30q1odpe07txf1whfldqv4lp6kr6g5hprjt3spl537lxk26or2qhssg35flrx4gghusl8j21tgc63uiwgx50s1we7muhuxtj2c2poknpye2iy9tu1c2dfttsucxcgs3dlvwjeqx4g1eoeuck4y7bjpnzisor30r5dexeaix8ta8lnq3lswbsgpjuciru1uhlliy5ockdtmkgwt5mp99jdn3yulcuache1ksb7lcm44n49inu2pzlchl1a9t8wm3orf4934yy4qnu0aajd8e1yrxjovtehr3jnue20v2rhptqqnerfb5y1kaqdysy39czttpmiko9yivc61nm3nf76ia7ojbdobulq78dkomkjqm6pmedbk7db9ya2z0deo1lzxy422h3yqpdsml8wlqq7kymk0sg3afwr057wb3smojm8zpw3idc5adlung7vurkatn6invjgn9c7rnqkb7kyd9ss44k6uv5m78blzy515h2shl8pzjpvswoa2c5taa8nec1nnusrndeml8edf8wq7rkkefaue4s4n9yddz2hrzvrmy4y3ugnywwdzjdnxsmv9lw2pgptgh9lp1j3rfmf15bs6v3fo467mnghvgvk0qg7j0pkpes8prx1yr23h222ypxemjqmwicjvub9yaelx65tx19b5m3r1rlx0hs6cotlajmj859dc76p3516l0kvvl9jfet6bt5opwo519gwq6ewszzxek9czrk1hdr6ip0s4779ab207t6we3s77zwaf4ntaxvr0e2kh8vgqnv2kspdridcub4op2nxgajpip9afvedegatolhx6egtil8u68z0scg85lidcf6arkv1k8aaknw4d1cx8wd321zcqvjqfmvap51ox8hzpv4acgpsqcgjedybnd28b9m0104g5yomjrtt8hdf7pcalspyoc7xrayq699cvt3z9xf0wy28m9s49plgda6y8viwe1kr4xjo1n2bslqmzxg3s780hn919yo4f1d83a9bkztrr5102nj0d2h5q2fhaqzw8xz07qlcm6rskly7h6ptuoh1xup8xgjiyzobxlxvwwwwp4dayhgg66pqrv30954ckwy79msl2avagi2tlvhnomkmkwv5qn8k2wyrro2j12qec1fdzz5i4hfjm9jaf6xk5ouww5561j3cyb6omizjg6j140l7whvz92m1yzwdqr3xbt03y1zh8pdln21jcze55mfg7y0fz9xz7kr2e4l491v902q7fymjhh4l4ft1i2061ubrar51gabq6bugyusvo8vshmxjndc8pyja6c9bpwhaq48kxzjhryjc0asbf216co1yqsi7ln2ld8bc19he6pg40n1bk6ekicbs8exeid0c0tom2bp5k4ff15cmvdluzxa2jzkk6s1ev219j2y9pwszz993aeb8hsv5h7f13qsywhtpe6bjx7thi1j9c1iw3x2hendzi1gd1lgo0vloozr9rv5a34q8z2riwjv79gi3gsx4c3fo5bi3r312agyqwqhgl2o6rv3fdt5y8vtyciww7pu16tryprprvsnzhmikcwox4m2pjpu0lk9xd98t8k8anzwvjd72612gtlm0zrf7vbw6h5wqpya349n9763ralqtilyss03lelyv3d5u71t9g7db0n92h08x91ygeivid0zaa91fovu61letoxduwro4mlj6i3c8955rvip06j0lr8wttzio3alxl4de1ze6mvnwe1tu6ygi426s2z2ejog7jrjc3qipytbkuw9tbpsvsbd708vkmr4n5fhj564jbq6m5d21azlfcli2uzpimsz7zi5jiwdqihaar7nipmacg1afvw9nhmcicf1a931j3j0hwgfwfbbome0ytcbedyobtgo6c9fudn21slel2y88n3sdeebrztpbenj8tu4tbsyakjq097kungceoqyrevb33p4nvt4gc1scx2iivllcd92xx1cqj3vzjqe5jex58v0mmpp2j591qkgxpwrbdte3ueuvhmwzbkueil9z09f54vw6wyv0hkgzcjml0xbt515mfs85l47nqyki38agst1zub9snosp14cegqsaqpb4fl3b35a6m00lorvxx049kmjm8w0bovmq8uw1letabk3me5malfqtneg90ulgszcx65hjckgz2vs7uui5jje5r1sf88d8bq69sojexbw50crcrs76i2rtx4pj8pgkvz6z83bofwys1v0pwtwzbcjbm98qbxrni1izmud07yy6nxw82onna4s4jszlhp2al9zl2iomwfesd9fno0pqzhuf0xfbwkxfbielprs24mghh3k838eduzwisehx3gxsxbt0esd8xk5c6z8q8s9smp4f779rnjgl107f6xpwd76u6z4b8xdls4tckxjhpctdgv2ojk3qun8hx3ot7ygbnm0bmfpo0zip2q95e1e87gymndhqp0mh0ai683zn3igbs94c3dx3egttnao6j8qzy15eesft8evf9o82l3pqs84tvraoy0zgbmuwykqjzzedcp0nxb1t8pnou088ddru4mol2hcd0k4h2ghk2bl9vfzmk0izllbyw6yq
lumgm9g7h4apanz63mvqc7d4swbqrnrgkpnb2z71jgr0jnde4ywznzv1wdakck3pojo9v765hf0dsprf4be3hzmmsvidvazufuhjmby1qe9di2pua7eu9gkp1kzlwdeq4rf4kz162t019oocma53rsbp1hiw65gew8kdim9ctj90wmihmuj9s37v0f84kt8d4px7zqmkswhtjkgt06antl2a8nmsxtux2bznd9f395y448581h8jt1d0o4d68d6s2vgdzz6clbqotewy8fyycn5einomd1svyld4a7868h6f38mqtny4l7nt1qxwvssw8x3dy54dzsocw0xrs86px9nktbgj3hf02vnxvhrz6snnpx3fjs68i2skrsqgjukanqqeqghknvp87qeaqybv4csq40ta9m4w1o5do0jomcankphg9vzhopqctr8x92q14ly21c3bdjoqrk4lv4fn3zaajnzezeif9jkt9cur3miw57eat7s70gqc5vxgl7vqhd0r1if2xh2cdlf39dz962t93kgpsd2j6u 00:45:12.112 19:38:27 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:45:12.112 19:38:27 -- dd/basic_rw.sh@59 -- # gen_conf 00:45:12.112 19:38:27 -- dd/common.sh@31 -- # xtrace_disable 00:45:12.112 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:45:12.112 { 00:45:12.112 "subsystems": [ 00:45:12.112 { 00:45:12.112 "subsystem": "bdev", 00:45:12.112 "config": [ 00:45:12.112 { 00:45:12.112 "params": { 00:45:12.112 "trtype": "pcie", 00:45:12.112 "traddr": "0000:00:10.0", 00:45:12.112 "name": "Nvme0" 00:45:12.112 }, 00:45:12.112 "method": "bdev_nvme_attach_controller" 00:45:12.112 }, 00:45:12.112 { 00:45:12.112 "method": "bdev_wait_for_examine" 00:45:12.112 } 00:45:12.112 ] 00:45:12.112 } 00:45:12.112 ] 00:45:12.112 } 00:45:12.112 [2024-04-18 19:38:28.017904] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:12.112 [2024-04-18 19:38:28.018103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145743 ] 00:45:12.370 [2024-04-18 19:38:28.199288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:12.652 [2024-04-18 19:38:28.410804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:14.285  Copying: 4096/4096 [B] (average 4000 kBps) 00:45:14.285 00:45:14.285 19:38:30 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:45:14.285 19:38:30 -- dd/basic_rw.sh@65 -- # gen_conf 00:45:14.285 19:38:30 -- dd/common.sh@31 -- # xtrace_disable 00:45:14.285 19:38:30 -- common/autotest_common.sh@10 -- # set +x 00:45:14.543 { 00:45:14.544 "subsystems": [ 00:45:14.544 { 00:45:14.544 "subsystem": "bdev", 00:45:14.544 "config": [ 00:45:14.544 { 00:45:14.544 "params": { 00:45:14.544 "trtype": "pcie", 00:45:14.544 "traddr": "0000:00:10.0", 00:45:14.544 "name": "Nvme0" 00:45:14.544 }, 00:45:14.544 "method": "bdev_nvme_attach_controller" 00:45:14.544 }, 00:45:14.544 { 00:45:14.544 "method": "bdev_wait_for_examine" 00:45:14.544 } 00:45:14.544 ] 00:45:14.544 } 00:45:14.544 ] 00:45:14.544 } 00:45:14.544 [2024-04-18 19:38:30.262629] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
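dd_rw_offset verifies that --seek and --skip address the bdev in block-sized units: the 4096-character payload produced by gen_bytes is written one block into the device with --seek=1, read back from the same offset with --skip=1 --count=1, and the read-back bytes must equal the original string (the long backslash-escaped comparison below is simply bash printing that equality test under xtrace). The same round trip in plain shell, as a sketch that reuses $SPDK_DD and $conf from the earlier sketches and substitutes a simple random-string generator for gen_bytes:

    # Sketch of the dd_rw_offset round trip: write at block offset 1, read it back.
    payload=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)   # stand-in for gen_bytes 4096
    printf '%s' "$payload" > dd.dump0
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 \
               --json <(printf '%s' "$conf")                   # write one block in
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 \
               --json <(printf '%s' "$conf")                   # read the same block back
    read -rn4096 data_check < dd.dump1
    [[ "$data_check" == "$payload" ]] && echo "offset read/write OK"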
00:45:14.544 [2024-04-18 19:38:30.263284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145780 ] 00:45:14.544 [2024-04-18 19:38:30.428421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.110 [2024-04-18 19:38:30.727892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:16.743  Copying: 4096/4096 [B] (average 4000 kBps) 00:45:16.743 00:45:16.743 19:38:32 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:45:16.743 19:38:32 -- dd/basic_rw.sh@72 -- # [[ 2fnpuqzx1f98pteui83mgs0c4zpxufggg75kgnh7h9qv9udld05d2dn7axd7o3j4z5hd70chc9emsj5glrvw1lh5rcqji5bwil2kzxg2k50tj04btr1ursj2qui1m2tomim4qhoirzrsyophvynmdxmhhg6ksbskrirdync3pbqwegv7vq96b8thbu6u0drdtlmd77ih9crf2tbi8fhdphh74o3rr800keh15jmsue6xfc9cwh6sbp7a8xra9zhdnq1wbzkpwcnh3ij45ao6827nuene9knilv8bdytajbhz7zht1nbebvsj4pufpnrv4icpi93iocue9527plbqlv4w4aebpsls7ovxe3sv1kpmetstu46rfvgbwcwuy5o9wu4ne0mlssk6vgaqadgd8ymrcncxp7rp1oe71jbdkh27t4sgocneej4ogi7r7nr0hi63g2bnjgvbp619gw6zle3zt204j331p0oa6098h9uo2webkf8q9zobfftdor93u9lz32qem3gxwumehw3rbx4b9arv7e1qqnyvs9vmobmv10fwpytzkey857bgo21piyhs4mn9w4nesrqppa9v9nt7lixckhvhdsnmxm5kt7cqifiucjg9cwy3fd2u12222tae48r3qqnytl7lvfdg9eqmqngmtbgyyndcobxy3nbir87f3pev6zshnyts0xfhg5y74l5jkve6ewaw213n8kxwj27fq9rtzcpqcfkyrmw3eyinsfccx6aj4hm3ob8kihfogqpj7cen8haxoqc8onxsspbj3l2bnp6j3ym4ukab8nb1pbkkqeplcco6vxvsfsqmlp6il41ncj59375ssa0evrrm2sj5qzv0oz0zf9v3rk2dc7phs0v43uylh8dc87rq7fct3n28sdceap0m75rdwp7k8ezuy3mx9ygvf30q1odpe07txf1whfldqv4lp6kr6g5hprjt3spl537lxk26or2qhssg35flrx4gghusl8j21tgc63uiwgx50s1we7muhuxtj2c2poknpye2iy9tu1c2dfttsucxcgs3dlvwjeqx4g1eoeuck4y7bjpnzisor30r5dexeaix8ta8lnq3lswbsgpjuciru1uhlliy5ockdtmkgwt5mp99jdn3yulcuache1ksb7lcm44n49inu2pzlchl1a9t8wm3orf4934yy4qnu0aajd8e1yrxjovtehr3jnue20v2rhptqqnerfb5y1kaqdysy39czttpmiko9yivc61nm3nf76ia7ojbdobulq78dkomkjqm6pmedbk7db9ya2z0deo1lzxy422h3yqpdsml8wlqq7kymk0sg3afwr057wb3smojm8zpw3idc5adlung7vurkatn6invjgn9c7rnqkb7kyd9ss44k6uv5m78blzy515h2shl8pzjpvswoa2c5taa8nec1nnusrndeml8edf8wq7rkkefaue4s4n9yddz2hrzvrmy4y3ugnywwdzjdnxsmv9lw2pgptgh9lp1j3rfmf15bs6v3fo467mnghvgvk0qg7j0pkpes8prx1yr23h222ypxemjqmwicjvub9yaelx65tx19b5m3r1rlx0hs6cotlajmj859dc76p3516l0kvvl9jfet6bt5opwo519gwq6ewszzxek9czrk1hdr6ip0s4779ab207t6we3s77zwaf4ntaxvr0e2kh8vgqnv2kspdridcub4op2nxgajpip9afvedegatolhx6egtil8u68z0scg85lidcf6arkv1k8aaknw4d1cx8wd321zcqvjqfmvap51ox8hzpv4acgpsqcgjedybnd28b9m0104g5yomjrtt8hdf7pcalspyoc7xrayq699cvt3z9xf0wy28m9s49plgda6y8viwe1kr4xjo1n2bslqmzxg3s780hn919yo4f1d83a9bkztrr5102nj0d2h5q2fhaqzw8xz07qlcm6rskly7h6ptuoh1xup8xgjiyzobxlxvwwwwp4dayhgg66pqrv30954ckwy79msl2avagi2tlvhnomkmkwv5qn8k2wyrro2j12qec1fdzz5i4hfjm9jaf6xk5ouww5561j3cyb6omizjg6j140l7whvz92m1yzwdqr3xbt03y1zh8pdln21jcze55mfg7y0fz9xz7kr2e4l491v902q7fymjhh4l4ft1i2061ubrar51gabq6bugyusvo8vshmxjndc8pyja6c9bpwhaq48kxzjhryjc0asbf216co1yqsi7ln2ld8bc19he6pg40n1bk6ekicbs8exeid0c0tom2bp5k4ff15cmvdluzxa2jzkk6s1ev219j2y9pwszz993aeb8hsv5h7f13qsywhtpe6bjx7thi1j9c1iw3x2hendzi1gd1lgo0vloozr9rv5a34q8z2riwjv79gi3gsx4c3fo5bi3r312agyqwqhgl2o6rv3fdt5y8vtyciww7pu16tryprprvsnzhmikcwox4m2pjpu0lk9xd98t8k8anzwvjd72612gtlm0zrf7vbw6h5wqpya349n9763ralqtilyss03lelyv3d5u71t9g7db0n92h08x91ygeivid0zaa91fovu61letoxduwro4mlj6i3c8955rvip06j0lr8wttzio3alxl4de1ze6mvnwe1tu6ygi426s2z2ejog7jrjc3qipytbkuw9tbpsvsbd708vkmr4n5fhj564jbq6m5d21azlfcli2uzpimsz7zi5jiwdqihaar7nipmacg1afvw9nhmcicf1a931j3j0hwgfwfbbome0ytcbedyobtgo6c9fudn21slel2y88n3sdee
brztpbenj8tu4tbsyakjq097kungceoqyrevb33p4nvt4gc1scx2iivllcd92xx1cqj3vzjqe5jex58v0mmpp2j591qkgxpwrbdte3ueuvhmwzbkueil9z09f54vw6wyv0hkgzcjml0xbt515mfs85l47nqyki38agst1zub9snosp14cegqsaqpb4fl3b35a6m00lorvxx049kmjm8w0bovmq8uw1letabk3me5malfqtneg90ulgszcx65hjckgz2vs7uui5jje5r1sf88d8bq69sojexbw50crcrs76i2rtx4pj8pgkvz6z83bofwys1v0pwtwzbcjbm98qbxrni1izmud07yy6nxw82onna4s4jszlhp2al9zl2iomwfesd9fno0pqzhuf0xfbwkxfbielprs24mghh3k838eduzwisehx3gxsxbt0esd8xk5c6z8q8s9smp4f779rnjgl107f6xpwd76u6z4b8xdls4tckxjhpctdgv2ojk3qun8hx3ot7ygbnm0bmfpo0zip2q95e1e87gymndhqp0mh0ai683zn3igbs94c3dx3egttnao6j8qzy15eesft8evf9o82l3pqs84tvraoy0zgbmuwykqjzzedcp0nxb1t8pnou088ddru4mol2hcd0k4h2ghk2bl9vfzmk0izllbyw6yqlumgm9g7h4apanz63mvqc7d4swbqrnrgkpnb2z71jgr0jnde4ywznzv1wdakck3pojo9v765hf0dsprf4be3hzmmsvidvazufuhjmby1qe9di2pua7eu9gkp1kzlwdeq4rf4kz162t019oocma53rsbp1hiw65gew8kdim9ctj90wmihmuj9s37v0f84kt8d4px7zqmkswhtjkgt06antl2a8nmsxtux2bznd9f395y448581h8jt1d0o4d68d6s2vgdzz6clbqotewy8fyycn5einomd1svyld4a7868h6f38mqtny4l7nt1qxwvssw8x3dy54dzsocw0xrs86px9nktbgj3hf02vnxvhrz6snnpx3fjs68i2skrsqgjukanqqeqghknvp87qeaqybv4csq40ta9m4w1o5do0jomcankphg9vzhopqctr8x92q14ly21c3bdjoqrk4lv4fn3zaajnzezeif9jkt9cur3miw57eat7s70gqc5vxgl7vqhd0r1if2xh2cdlf39dz962t93kgpsd2j6u == \2\f\n\p\u\q\z\x\1\f\9\8\p\t\e\u\i\8\3\m\g\s\0\c\4\z\p\x\u\f\g\g\g\7\5\k\g\n\h\7\h\9\q\v\9\u\d\l\d\0\5\d\2\d\n\7\a\x\d\7\o\3\j\4\z\5\h\d\7\0\c\h\c\9\e\m\s\j\5\g\l\r\v\w\1\l\h\5\r\c\q\j\i\5\b\w\i\l\2\k\z\x\g\2\k\5\0\t\j\0\4\b\t\r\1\u\r\s\j\2\q\u\i\1\m\2\t\o\m\i\m\4\q\h\o\i\r\z\r\s\y\o\p\h\v\y\n\m\d\x\m\h\h\g\6\k\s\b\s\k\r\i\r\d\y\n\c\3\p\b\q\w\e\g\v\7\v\q\9\6\b\8\t\h\b\u\6\u\0\d\r\d\t\l\m\d\7\7\i\h\9\c\r\f\2\t\b\i\8\f\h\d\p\h\h\7\4\o\3\r\r\8\0\0\k\e\h\1\5\j\m\s\u\e\6\x\f\c\9\c\w\h\6\s\b\p\7\a\8\x\r\a\9\z\h\d\n\q\1\w\b\z\k\p\w\c\n\h\3\i\j\4\5\a\o\6\8\2\7\n\u\e\n\e\9\k\n\i\l\v\8\b\d\y\t\a\j\b\h\z\7\z\h\t\1\n\b\e\b\v\s\j\4\p\u\f\p\n\r\v\4\i\c\p\i\9\3\i\o\c\u\e\9\5\2\7\p\l\b\q\l\v\4\w\4\a\e\b\p\s\l\s\7\o\v\x\e\3\s\v\1\k\p\m\e\t\s\t\u\4\6\r\f\v\g\b\w\c\w\u\y\5\o\9\w\u\4\n\e\0\m\l\s\s\k\6\v\g\a\q\a\d\g\d\8\y\m\r\c\n\c\x\p\7\r\p\1\o\e\7\1\j\b\d\k\h\2\7\t\4\s\g\o\c\n\e\e\j\4\o\g\i\7\r\7\n\r\0\h\i\6\3\g\2\b\n\j\g\v\b\p\6\1\9\g\w\6\z\l\e\3\z\t\2\0\4\j\3\3\1\p\0\o\a\6\0\9\8\h\9\u\o\2\w\e\b\k\f\8\q\9\z\o\b\f\f\t\d\o\r\9\3\u\9\l\z\3\2\q\e\m\3\g\x\w\u\m\e\h\w\3\r\b\x\4\b\9\a\r\v\7\e\1\q\q\n\y\v\s\9\v\m\o\b\m\v\1\0\f\w\p\y\t\z\k\e\y\8\5\7\b\g\o\2\1\p\i\y\h\s\4\m\n\9\w\4\n\e\s\r\q\p\p\a\9\v\9\n\t\7\l\i\x\c\k\h\v\h\d\s\n\m\x\m\5\k\t\7\c\q\i\f\i\u\c\j\g\9\c\w\y\3\f\d\2\u\1\2\2\2\2\t\a\e\4\8\r\3\q\q\n\y\t\l\7\l\v\f\d\g\9\e\q\m\q\n\g\m\t\b\g\y\y\n\d\c\o\b\x\y\3\n\b\i\r\8\7\f\3\p\e\v\6\z\s\h\n\y\t\s\0\x\f\h\g\5\y\7\4\l\5\j\k\v\e\6\e\w\a\w\2\1\3\n\8\k\x\w\j\2\7\f\q\9\r\t\z\c\p\q\c\f\k\y\r\m\w\3\e\y\i\n\s\f\c\c\x\6\a\j\4\h\m\3\o\b\8\k\i\h\f\o\g\q\p\j\7\c\e\n\8\h\a\x\o\q\c\8\o\n\x\s\s\p\b\j\3\l\2\b\n\p\6\j\3\y\m\4\u\k\a\b\8\n\b\1\p\b\k\k\q\e\p\l\c\c\o\6\v\x\v\s\f\s\q\m\l\p\6\i\l\4\1\n\c\j\5\9\3\7\5\s\s\a\0\e\v\r\r\m\2\s\j\5\q\z\v\0\o\z\0\z\f\9\v\3\r\k\2\d\c\7\p\h\s\0\v\4\3\u\y\l\h\8\d\c\8\7\r\q\7\f\c\t\3\n\2\8\s\d\c\e\a\p\0\m\7\5\r\d\w\p\7\k\8\e\z\u\y\3\m\x\9\y\g\v\f\3\0\q\1\o\d\p\e\0\7\t\x\f\1\w\h\f\l\d\q\v\4\l\p\6\k\r\6\g\5\h\p\r\j\t\3\s\p\l\5\3\7\l\x\k\2\6\o\r\2\q\h\s\s\g\3\5\f\l\r\x\4\g\g\h\u\s\l\8\j\2\1\t\g\c\6\3\u\i\w\g\x\5\0\s\1\w\e\7\m\u\h\u\x\t\j\2\c\2\p\o\k\n\p\y\e\2\i\y\9\t\u\1\c\2\d\f\t\t\s\u\c\x\c\g\s\3\d\l\v\w\j\e\q\x\4\g\1\e\o\e\u\c\k\4\y\7\b\j\p\n\z\i\s\o\r\3\0\r\5\d\e\x\e\a\i\x\8\t\a\8\l\n\q\3\l\s\w\b\s\g\p\j\u\c\i\r\u\1\u\h\l\l\i\y\5\o\c\k\d\t\m\k\g\w\t\5\m\p\9\9\j\d\n\3\y\u\l\c\u\a\c\
h\e\1\k\s\b\7\l\c\m\4\4\n\4\9\i\n\u\2\p\z\l\c\h\l\1\a\9\t\8\w\m\3\o\r\f\4\9\3\4\y\y\4\q\n\u\0\a\a\j\d\8\e\1\y\r\x\j\o\v\t\e\h\r\3\j\n\u\e\2\0\v\2\r\h\p\t\q\q\n\e\r\f\b\5\y\1\k\a\q\d\y\s\y\3\9\c\z\t\t\p\m\i\k\o\9\y\i\v\c\6\1\n\m\3\n\f\7\6\i\a\7\o\j\b\d\o\b\u\l\q\7\8\d\k\o\m\k\j\q\m\6\p\m\e\d\b\k\7\d\b\9\y\a\2\z\0\d\e\o\1\l\z\x\y\4\2\2\h\3\y\q\p\d\s\m\l\8\w\l\q\q\7\k\y\m\k\0\s\g\3\a\f\w\r\0\5\7\w\b\3\s\m\o\j\m\8\z\p\w\3\i\d\c\5\a\d\l\u\n\g\7\v\u\r\k\a\t\n\6\i\n\v\j\g\n\9\c\7\r\n\q\k\b\7\k\y\d\9\s\s\4\4\k\6\u\v\5\m\7\8\b\l\z\y\5\1\5\h\2\s\h\l\8\p\z\j\p\v\s\w\o\a\2\c\5\t\a\a\8\n\e\c\1\n\n\u\s\r\n\d\e\m\l\8\e\d\f\8\w\q\7\r\k\k\e\f\a\u\e\4\s\4\n\9\y\d\d\z\2\h\r\z\v\r\m\y\4\y\3\u\g\n\y\w\w\d\z\j\d\n\x\s\m\v\9\l\w\2\p\g\p\t\g\h\9\l\p\1\j\3\r\f\m\f\1\5\b\s\6\v\3\f\o\4\6\7\m\n\g\h\v\g\v\k\0\q\g\7\j\0\p\k\p\e\s\8\p\r\x\1\y\r\2\3\h\2\2\2\y\p\x\e\m\j\q\m\w\i\c\j\v\u\b\9\y\a\e\l\x\6\5\t\x\1\9\b\5\m\3\r\1\r\l\x\0\h\s\6\c\o\t\l\a\j\m\j\8\5\9\d\c\7\6\p\3\5\1\6\l\0\k\v\v\l\9\j\f\e\t\6\b\t\5\o\p\w\o\5\1\9\g\w\q\6\e\w\s\z\z\x\e\k\9\c\z\r\k\1\h\d\r\6\i\p\0\s\4\7\7\9\a\b\2\0\7\t\6\w\e\3\s\7\7\z\w\a\f\4\n\t\a\x\v\r\0\e\2\k\h\8\v\g\q\n\v\2\k\s\p\d\r\i\d\c\u\b\4\o\p\2\n\x\g\a\j\p\i\p\9\a\f\v\e\d\e\g\a\t\o\l\h\x\6\e\g\t\i\l\8\u\6\8\z\0\s\c\g\8\5\l\i\d\c\f\6\a\r\k\v\1\k\8\a\a\k\n\w\4\d\1\c\x\8\w\d\3\2\1\z\c\q\v\j\q\f\m\v\a\p\5\1\o\x\8\h\z\p\v\4\a\c\g\p\s\q\c\g\j\e\d\y\b\n\d\2\8\b\9\m\0\1\0\4\g\5\y\o\m\j\r\t\t\8\h\d\f\7\p\c\a\l\s\p\y\o\c\7\x\r\a\y\q\6\9\9\c\v\t\3\z\9\x\f\0\w\y\2\8\m\9\s\4\9\p\l\g\d\a\6\y\8\v\i\w\e\1\k\r\4\x\j\o\1\n\2\b\s\l\q\m\z\x\g\3\s\7\8\0\h\n\9\1\9\y\o\4\f\1\d\8\3\a\9\b\k\z\t\r\r\5\1\0\2\n\j\0\d\2\h\5\q\2\f\h\a\q\z\w\8\x\z\0\7\q\l\c\m\6\r\s\k\l\y\7\h\6\p\t\u\o\h\1\x\u\p\8\x\g\j\i\y\z\o\b\x\l\x\v\w\w\w\w\p\4\d\a\y\h\g\g\6\6\p\q\r\v\3\0\9\5\4\c\k\w\y\7\9\m\s\l\2\a\v\a\g\i\2\t\l\v\h\n\o\m\k\m\k\w\v\5\q\n\8\k\2\w\y\r\r\o\2\j\1\2\q\e\c\1\f\d\z\z\5\i\4\h\f\j\m\9\j\a\f\6\x\k\5\o\u\w\w\5\5\6\1\j\3\c\y\b\6\o\m\i\z\j\g\6\j\1\4\0\l\7\w\h\v\z\9\2\m\1\y\z\w\d\q\r\3\x\b\t\0\3\y\1\z\h\8\p\d\l\n\2\1\j\c\z\e\5\5\m\f\g\7\y\0\f\z\9\x\z\7\k\r\2\e\4\l\4\9\1\v\9\0\2\q\7\f\y\m\j\h\h\4\l\4\f\t\1\i\2\0\6\1\u\b\r\a\r\5\1\g\a\b\q\6\b\u\g\y\u\s\v\o\8\v\s\h\m\x\j\n\d\c\8\p\y\j\a\6\c\9\b\p\w\h\a\q\4\8\k\x\z\j\h\r\y\j\c\0\a\s\b\f\2\1\6\c\o\1\y\q\s\i\7\l\n\2\l\d\8\b\c\1\9\h\e\6\p\g\4\0\n\1\b\k\6\e\k\i\c\b\s\8\e\x\e\i\d\0\c\0\t\o\m\2\b\p\5\k\4\f\f\1\5\c\m\v\d\l\u\z\x\a\2\j\z\k\k\6\s\1\e\v\2\1\9\j\2\y\9\p\w\s\z\z\9\9\3\a\e\b\8\h\s\v\5\h\7\f\1\3\q\s\y\w\h\t\p\e\6\b\j\x\7\t\h\i\1\j\9\c\1\i\w\3\x\2\h\e\n\d\z\i\1\g\d\1\l\g\o\0\v\l\o\o\z\r\9\r\v\5\a\3\4\q\8\z\2\r\i\w\j\v\7\9\g\i\3\g\s\x\4\c\3\f\o\5\b\i\3\r\3\1\2\a\g\y\q\w\q\h\g\l\2\o\6\r\v\3\f\d\t\5\y\8\v\t\y\c\i\w\w\7\p\u\1\6\t\r\y\p\r\p\r\v\s\n\z\h\m\i\k\c\w\o\x\4\m\2\p\j\p\u\0\l\k\9\x\d\9\8\t\8\k\8\a\n\z\w\v\j\d\7\2\6\1\2\g\t\l\m\0\z\r\f\7\v\b\w\6\h\5\w\q\p\y\a\3\4\9\n\9\7\6\3\r\a\l\q\t\i\l\y\s\s\0\3\l\e\l\y\v\3\d\5\u\7\1\t\9\g\7\d\b\0\n\9\2\h\0\8\x\9\1\y\g\e\i\v\i\d\0\z\a\a\9\1\f\o\v\u\6\1\l\e\t\o\x\d\u\w\r\o\4\m\l\j\6\i\3\c\8\9\5\5\r\v\i\p\0\6\j\0\l\r\8\w\t\t\z\i\o\3\a\l\x\l\4\d\e\1\z\e\6\m\v\n\w\e\1\t\u\6\y\g\i\4\2\6\s\2\z\2\e\j\o\g\7\j\r\j\c\3\q\i\p\y\t\b\k\u\w\9\t\b\p\s\v\s\b\d\7\0\8\v\k\m\r\4\n\5\f\h\j\5\6\4\j\b\q\6\m\5\d\2\1\a\z\l\f\c\l\i\2\u\z\p\i\m\s\z\7\z\i\5\j\i\w\d\q\i\h\a\a\r\7\n\i\p\m\a\c\g\1\a\f\v\w\9\n\h\m\c\i\c\f\1\a\9\3\1\j\3\j\0\h\w\g\f\w\f\b\b\o\m\e\0\y\t\c\b\e\d\y\o\b\t\g\o\6\c\9\f\u\d\n\2\1\s\l\e\l\2\y\8\8\n\3\s\d\e\e\b\r\z\t\p\b\e\n\j\8\t\u\4\t\b\s\y\a\k\j\q\0\9\7\k\u\n\g\c\e\o\q\y\r\e\v\b\3\3\p\4\n\v\t\4\g\c\1\s\c\x\2\i\i\v\l\l\c\d\9\2\x\x\1\c\q\j\3\v\z\j\q\e
\5\j\e\x\5\8\v\0\m\m\p\p\2\j\5\9\1\q\k\g\x\p\w\r\b\d\t\e\3\u\e\u\v\h\m\w\z\b\k\u\e\i\l\9\z\0\9\f\5\4\v\w\6\w\y\v\0\h\k\g\z\c\j\m\l\0\x\b\t\5\1\5\m\f\s\8\5\l\4\7\n\q\y\k\i\3\8\a\g\s\t\1\z\u\b\9\s\n\o\s\p\1\4\c\e\g\q\s\a\q\p\b\4\f\l\3\b\3\5\a\6\m\0\0\l\o\r\v\x\x\0\4\9\k\m\j\m\8\w\0\b\o\v\m\q\8\u\w\1\l\e\t\a\b\k\3\m\e\5\m\a\l\f\q\t\n\e\g\9\0\u\l\g\s\z\c\x\6\5\h\j\c\k\g\z\2\v\s\7\u\u\i\5\j\j\e\5\r\1\s\f\8\8\d\8\b\q\6\9\s\o\j\e\x\b\w\5\0\c\r\c\r\s\7\6\i\2\r\t\x\4\p\j\8\p\g\k\v\z\6\z\8\3\b\o\f\w\y\s\1\v\0\p\w\t\w\z\b\c\j\b\m\9\8\q\b\x\r\n\i\1\i\z\m\u\d\0\7\y\y\6\n\x\w\8\2\o\n\n\a\4\s\4\j\s\z\l\h\p\2\a\l\9\z\l\2\i\o\m\w\f\e\s\d\9\f\n\o\0\p\q\z\h\u\f\0\x\f\b\w\k\x\f\b\i\e\l\p\r\s\2\4\m\g\h\h\3\k\8\3\8\e\d\u\z\w\i\s\e\h\x\3\g\x\s\x\b\t\0\e\s\d\8\x\k\5\c\6\z\8\q\8\s\9\s\m\p\4\f\7\7\9\r\n\j\g\l\1\0\7\f\6\x\p\w\d\7\6\u\6\z\4\b\8\x\d\l\s\4\t\c\k\x\j\h\p\c\t\d\g\v\2\o\j\k\3\q\u\n\8\h\x\3\o\t\7\y\g\b\n\m\0\b\m\f\p\o\0\z\i\p\2\q\9\5\e\1\e\8\7\g\y\m\n\d\h\q\p\0\m\h\0\a\i\6\8\3\z\n\3\i\g\b\s\9\4\c\3\d\x\3\e\g\t\t\n\a\o\6\j\8\q\z\y\1\5\e\e\s\f\t\8\e\v\f\9\o\8\2\l\3\p\q\s\8\4\t\v\r\a\o\y\0\z\g\b\m\u\w\y\k\q\j\z\z\e\d\c\p\0\n\x\b\1\t\8\p\n\o\u\0\8\8\d\d\r\u\4\m\o\l\2\h\c\d\0\k\4\h\2\g\h\k\2\b\l\9\v\f\z\m\k\0\i\z\l\l\b\y\w\6\y\q\l\u\m\g\m\9\g\7\h\4\a\p\a\n\z\6\3\m\v\q\c\7\d\4\s\w\b\q\r\n\r\g\k\p\n\b\2\z\7\1\j\g\r\0\j\n\d\e\4\y\w\z\n\z\v\1\w\d\a\k\c\k\3\p\o\j\o\9\v\7\6\5\h\f\0\d\s\p\r\f\4\b\e\3\h\z\m\m\s\v\i\d\v\a\z\u\f\u\h\j\m\b\y\1\q\e\9\d\i\2\p\u\a\7\e\u\9\g\k\p\1\k\z\l\w\d\e\q\4\r\f\4\k\z\1\6\2\t\0\1\9\o\o\c\m\a\5\3\r\s\b\p\1\h\i\w\6\5\g\e\w\8\k\d\i\m\9\c\t\j\9\0\w\m\i\h\m\u\j\9\s\3\7\v\0\f\8\4\k\t\8\d\4\p\x\7\z\q\m\k\s\w\h\t\j\k\g\t\0\6\a\n\t\l\2\a\8\n\m\s\x\t\u\x\2\b\z\n\d\9\f\3\9\5\y\4\4\8\5\8\1\h\8\j\t\1\d\0\o\4\d\6\8\d\6\s\2\v\g\d\z\z\6\c\l\b\q\o\t\e\w\y\8\f\y\y\c\n\5\e\i\n\o\m\d\1\s\v\y\l\d\4\a\7\8\6\8\h\6\f\3\8\m\q\t\n\y\4\l\7\n\t\1\q\x\w\v\s\s\w\8\x\3\d\y\5\4\d\z\s\o\c\w\0\x\r\s\8\6\p\x\9\n\k\t\b\g\j\3\h\f\0\2\v\n\x\v\h\r\z\6\s\n\n\p\x\3\f\j\s\6\8\i\2\s\k\r\s\q\g\j\u\k\a\n\q\q\e\q\g\h\k\n\v\p\8\7\q\e\a\q\y\b\v\4\c\s\q\4\0\t\a\9\m\4\w\1\o\5\d\o\0\j\o\m\c\a\n\k\p\h\g\9\v\z\h\o\p\q\c\t\r\8\x\9\2\q\1\4\l\y\2\1\c\3\b\d\j\o\q\r\k\4\l\v\4\f\n\3\z\a\a\j\n\z\e\z\e\i\f\9\j\k\t\9\c\u\r\3\m\i\w\5\7\e\a\t\7\s\7\0\g\q\c\5\v\x\g\l\7\v\q\h\d\0\r\1\i\f\2\x\h\2\c\d\l\f\3\9\d\z\9\6\2\t\9\3\k\g\p\s\d\2\j\6\u ]] 00:45:16.743 00:45:16.743 real 0m4.722s 00:45:16.743 user 0m4.053s 00:45:16.743 sys 0m0.522s 00:45:16.743 19:38:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:16.743 ************************************ 00:45:16.743 END TEST dd_rw_offset 00:45:16.743 ************************************ 00:45:16.743 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:45:16.743 19:38:32 -- dd/basic_rw.sh@1 -- # cleanup 00:45:16.743 19:38:32 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:45:16.743 19:38:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:45:16.743 19:38:32 -- dd/common.sh@11 -- # local nvme_ref= 00:45:16.743 19:38:32 -- dd/common.sh@12 -- # local size=0xffff 00:45:16.743 19:38:32 -- dd/common.sh@14 -- # local bs=1048576 00:45:16.743 19:38:32 -- dd/common.sh@15 -- # local count=1 00:45:16.743 19:38:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:45:16.743 19:38:32 -- dd/common.sh@18 -- # gen_conf 00:45:16.743 19:38:32 -- dd/common.sh@31 -- # xtrace_disable 00:45:16.743 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:45:17.002 { 00:45:17.002 "subsystems": [ 00:45:17.002 { 00:45:17.002 
"subsystem": "bdev", 00:45:17.002 "config": [ 00:45:17.002 { 00:45:17.002 "params": { 00:45:17.002 "trtype": "pcie", 00:45:17.002 "traddr": "0000:00:10.0", 00:45:17.002 "name": "Nvme0" 00:45:17.002 }, 00:45:17.002 "method": "bdev_nvme_attach_controller" 00:45:17.002 }, 00:45:17.002 { 00:45:17.002 "method": "bdev_wait_for_examine" 00:45:17.002 } 00:45:17.002 ] 00:45:17.002 } 00:45:17.002 ] 00:45:17.002 } 00:45:17.002 [2024-04-18 19:38:32.695489] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:17.002 [2024-04-18 19:38:32.695677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145845 ] 00:45:17.002 [2024-04-18 19:38:32.863531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.259 [2024-04-18 19:38:33.124673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.236  Copying: 1024/1024 [kB] (average 1000 MBps) 00:45:19.236 00:45:19.236 19:38:34 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:19.236 00:45:19.236 real 0m55.330s 00:45:19.236 user 0m47.709s 00:45:19.236 sys 0m5.975s 00:45:19.236 ************************************ 00:45:19.236 END TEST spdk_dd_basic_rw 00:45:19.236 19:38:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:19.236 19:38:34 -- common/autotest_common.sh@10 -- # set +x 00:45:19.237 ************************************ 00:45:19.237 19:38:34 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:45:19.237 19:38:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:19.237 19:38:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:19.237 19:38:34 -- common/autotest_common.sh@10 -- # set +x 00:45:19.237 ************************************ 00:45:19.237 START TEST spdk_dd_posix 00:45:19.237 ************************************ 00:45:19.237 19:38:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:45:19.237 * Looking for test storage... 
00:45:19.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:45:19.237 19:38:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:19.237 19:38:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:19.237 19:38:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:19.237 19:38:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:19.237 19:38:35 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:19.237 19:38:35 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:19.237 19:38:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:19.237 19:38:35 -- paths/export.sh@5 -- # export PATH 00:45:19.237 19:38:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:19.237 19:38:35 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:45:19.237 19:38:35 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:45:19.237 19:38:35 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:45:19.237 19:38:35 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:45:19.237 19:38:35 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:19.237 19:38:35 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:19.237 19:38:35 -- dd/posix.sh@130 -- # tests 00:45:19.237 19:38:35 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:45:19.237 * First test run, using AIO 00:45:19.237 19:38:35 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:45:19.237 19:38:35 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:19.237 19:38:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:19.237 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:45:19.237 ************************************ 00:45:19.237 START TEST dd_flag_append 00:45:19.237 ************************************ 00:45:19.237 19:38:35 -- common/autotest_common.sh@1111 -- # append 00:45:19.237 19:38:35 -- dd/posix.sh@16 -- # local dump0 00:45:19.237 19:38:35 -- dd/posix.sh@17 -- # local dump1 00:45:19.237 19:38:35 -- dd/posix.sh@19 -- # gen_bytes 32 00:45:19.237 19:38:35 -- dd/common.sh@98 -- # xtrace_disable 00:45:19.237 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:45:19.237 19:38:35 -- dd/posix.sh@19 -- # dump0=1qp7xxb2gwy9iuiyyixalo6w11oqqfh8 00:45:19.237 19:38:35 -- dd/posix.sh@20 -- # gen_bytes 32 00:45:19.237 19:38:35 -- dd/common.sh@98 -- # xtrace_disable 00:45:19.237 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:45:19.237 19:38:35 -- dd/posix.sh@20 -- # dump1=ueudnxgn7e6wei3y9iou55lvexc2d4zv 00:45:19.237 19:38:35 -- dd/posix.sh@22 -- # printf %s 1qp7xxb2gwy9iuiyyixalo6w11oqqfh8 00:45:19.237 19:38:35 -- dd/posix.sh@23 -- # printf %s ueudnxgn7e6wei3y9iou55lvexc2d4zv 00:45:19.237 19:38:35 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:45:19.496 [2024-04-18 19:38:35.174786] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:19.496 [2024-04-18 19:38:35.174998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145941 ] 00:45:19.496 [2024-04-18 19:38:35.353620] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.754 [2024-04-18 19:38:35.563748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:21.468  Copying: 32/32 [B] (average 31 kBps) 00:45:21.468 00:45:21.468 19:38:37 -- dd/posix.sh@27 -- # [[ ueudnxgn7e6wei3y9iou55lvexc2d4zv1qp7xxb2gwy9iuiyyixalo6w11oqqfh8 == \u\e\u\d\n\x\g\n\7\e\6\w\e\i\3\y\9\i\o\u\5\5\l\v\e\x\c\2\d\4\z\v\1\q\p\7\x\x\b\2\g\w\y\9\i\u\i\y\y\i\x\a\l\o\6\w\1\1\o\q\q\f\h\8 ]] 00:45:21.468 ************************************ 00:45:21.468 END TEST dd_flag_append 00:45:21.468 ************************************ 00:45:21.468 00:45:21.468 real 0m2.223s 00:45:21.468 user 0m1.875s 00:45:21.468 sys 0m0.219s 00:45:21.468 19:38:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:21.468 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:45:21.728 19:38:37 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:45:21.728 19:38:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:21.728 19:38:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:21.728 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:45:21.728 ************************************ 00:45:21.728 START TEST dd_flag_directory 00:45:21.728 ************************************ 00:45:21.728 19:38:37 -- common/autotest_common.sh@1111 -- # directory 00:45:21.728 19:38:37 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:21.728 19:38:37 -- common/autotest_common.sh@638 -- # local es=0 00:45:21.728 
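For reference, the dd_flag_append case that just completed above can be reproduced outside the harness in a few lines of shell: two 32-byte files, one spdk_dd run with --oflag=append, and the concatenation check that produced the [[ ... ]] line in the trace. gen_bytes below is a simplified stand-in for the dd/common.sh helper of the same name; the spdk_dd path is the one used throughout this log.

    # Minimal re-creation of the append check (sketch, not the harness code).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    gen_bytes() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }   # stand-in helper

    dump0=$(gen_bytes 32); printf %s "$dump0" > dd.dump0
    dump1=$(gen_bytes 32); printf %s "$dump1" > dd.dump1

    "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

    # dd.dump1 must now hold its original bytes followed by dump0's bytes.
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo 'append OK'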
19:38:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:21.728 19:38:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:21.728 19:38:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:21.728 19:38:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:21.728 19:38:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:21.729 19:38:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:21.729 19:38:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:21.729 19:38:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:21.729 19:38:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:21.729 19:38:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:21.729 [2024-04-18 19:38:37.486047] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:21.729 [2024-04-18 19:38:37.486305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145998 ] 00:45:21.987 [2024-04-18 19:38:37.676087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:22.245 [2024-04-18 19:38:37.919347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:22.503 [2024-04-18 19:38:38.266433] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:45:22.503 [2024-04-18 19:38:38.266557] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:45:22.503 [2024-04-18 19:38:38.266585] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:23.439 [2024-04-18 19:38:39.171282] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:24.007 19:38:39 -- common/autotest_common.sh@641 -- # es=236 00:45:24.007 19:38:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:45:24.007 19:38:39 -- common/autotest_common.sh@650 -- # es=108 00:45:24.007 19:38:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:45:24.007 19:38:39 -- common/autotest_common.sh@658 -- # es=1 00:45:24.007 19:38:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:45:24.007 19:38:39 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:45:24.007 19:38:39 -- common/autotest_common.sh@638 -- # local es=0 00:45:24.007 19:38:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:45:24.007 19:38:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:24.007 19:38:39 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:45:24.007 19:38:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:24.007 19:38:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:24.007 19:38:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:24.007 19:38:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:24.007 19:38:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:24.007 19:38:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:24.007 19:38:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:45:24.007 [2024-04-18 19:38:39.707409] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:24.007 [2024-04-18 19:38:39.707556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146030 ] 00:45:24.007 [2024-04-18 19:38:39.873616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:24.266 [2024-04-18 19:38:40.097511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:24.832 [2024-04-18 19:38:40.464458] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:45:24.832 [2024-04-18 19:38:40.464541] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:45:24.832 [2024-04-18 19:38:40.464568] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:25.769 [2024-04-18 19:38:41.373431] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:26.027 19:38:41 -- common/autotest_common.sh@641 -- # es=236 00:45:26.027 19:38:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:45:26.027 19:38:41 -- common/autotest_common.sh@650 -- # es=108 00:45:26.027 19:38:41 -- common/autotest_common.sh@651 -- # case "$es" in 00:45:26.027 19:38:41 -- common/autotest_common.sh@658 -- # es=1 00:45:26.027 19:38:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:45:26.027 00:45:26.027 real 0m4.455s 00:45:26.027 user 0m3.809s 00:45:26.027 sys 0m0.445s 00:45:26.027 19:38:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:26.027 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:45:26.027 ************************************ 00:45:26.027 END TEST dd_flag_directory 00:45:26.027 ************************************ 00:45:26.027 19:38:41 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:45:26.027 19:38:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:26.027 19:38:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:26.027 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:45:26.027 ************************************ 00:45:26.027 START TEST dd_flag_nofollow 00:45:26.027 ************************************ 00:45:26.027 19:38:41 -- common/autotest_common.sh@1111 -- # nofollow 00:45:26.027 19:38:41 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:45:26.027 19:38:41 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:45:26.027 19:38:41 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:45:26.027 19:38:41 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:45:26.027 19:38:41 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:26.027 19:38:41 -- common/autotest_common.sh@638 -- # local es=0 00:45:26.027 19:38:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:26.027 19:38:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:26.028 19:38:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:26.028 19:38:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:26.028 19:38:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:26.028 19:38:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:26.028 19:38:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:26.028 19:38:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:26.028 19:38:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:26.028 19:38:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:26.286 [2024-04-18 19:38:42.013897] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
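Both the dd_flag_directory case that finished a little above and the dd_flag_nofollow case now being set up follow the same expected-failure pattern: the spdk_dd invocation is wrapped in NOT, so the "Not a directory" / "Too many levels of symbolic links" errors and the non-zero exit code are exactly what make the test pass. Reduced to its essentials (NOT here is a one-line stand-in for the autotest_common.sh helper, which additionally normalizes signal-style exit codes):

    # Expected-failure wrapper pattern used by the directory and nofollow tests.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    NOT() { ! "$@"; }   # simplified: succeed only when the wrapped command fails

    # dd.dump0 is a regular file, so asking for directory semantics must fail:
    NOT "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0    # "Not a directory"
    NOT "$SPDK_DD" --if=dd.dump0 --of=dd.dump0 --oflag=directory    # same error on the output side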
00:45:26.286 [2024-04-18 19:38:42.014327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146107 ] 00:45:26.286 [2024-04-18 19:38:42.193333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.543 [2024-04-18 19:38:42.423063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:27.109 [2024-04-18 19:38:42.782602] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:45:27.109 [2024-04-18 19:38:42.782884] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:45:27.109 [2024-04-18 19:38:42.783018] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:28.041 [2024-04-18 19:38:43.780095] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:28.608 19:38:44 -- common/autotest_common.sh@641 -- # es=216 00:45:28.608 19:38:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:45:28.608 19:38:44 -- common/autotest_common.sh@650 -- # es=88 00:45:28.608 19:38:44 -- common/autotest_common.sh@651 -- # case "$es" in 00:45:28.608 19:38:44 -- common/autotest_common.sh@658 -- # es=1 00:45:28.608 19:38:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:45:28.608 19:38:44 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:45:28.608 19:38:44 -- common/autotest_common.sh@638 -- # local es=0 00:45:28.608 19:38:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:45:28.608 19:38:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.608 19:38:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:28.608 19:38:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.608 19:38:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:28.608 19:38:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.608 19:38:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:28.608 19:38:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.608 19:38:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:28.608 19:38:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:45:28.608 [2024-04-18 19:38:44.328957] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:28.608 [2024-04-18 19:38:44.329357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146138 ] 00:45:28.608 [2024-04-18 19:38:44.495060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:28.866 [2024-04-18 19:38:44.711900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:29.434 [2024-04-18 19:38:45.060391] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:45:29.434 [2024-04-18 19:38:45.060679] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:45:29.434 [2024-04-18 19:38:45.060849] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:30.378 [2024-04-18 19:38:45.970484] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:30.635 19:38:46 -- common/autotest_common.sh@641 -- # es=216 00:45:30.635 19:38:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:45:30.635 19:38:46 -- common/autotest_common.sh@650 -- # es=88 00:45:30.635 19:38:46 -- common/autotest_common.sh@651 -- # case "$es" in 00:45:30.635 19:38:46 -- common/autotest_common.sh@658 -- # es=1 00:45:30.635 19:38:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:45:30.635 19:38:46 -- dd/posix.sh@46 -- # gen_bytes 512 00:45:30.635 19:38:46 -- dd/common.sh@98 -- # xtrace_disable 00:45:30.635 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:45:30.635 19:38:46 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:30.635 [2024-04-18 19:38:46.513780] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
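The two failing runs above are the nofollow negative checks: dd.dump0.link and dd.dump1.link are symlinks to the real dump files, and opening either one with the nofollow flag (O_NOFOLLOW underneath) fails with "Too many levels of symbolic links". The run starting here is the positive half, copying through the link without the flag. Condensed into a sketch:

    # Symlink setup plus the three nofollow runs traced above; "!" stands in
    # for the NOT wrapper on the two runs that are expected to fail.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link

    ! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # ELOOP on the input link
    ! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # ELOOP on the output link
    "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1                      # no flag: link followed, copy succeeds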
00:45:30.635 [2024-04-18 19:38:46.514204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146164 ] 00:45:30.893 [2024-04-18 19:38:46.680111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:31.151 [2024-04-18 19:38:46.901632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:32.815  Copying: 512/512 [B] (average 500 kBps) 00:45:32.815 00:45:32.815 ************************************ 00:45:32.815 END TEST dd_flag_nofollow 00:45:32.815 ************************************ 00:45:32.815 19:38:48 -- dd/posix.sh@49 -- # [[ ijkoja90zt958srkixmajdu38f60gx3pr2jzubnfzyyxhw19gr35mpt3wciatforx4ndhyu2wg22336mquu0n2dtci7e43g6kk8aazvx82r0bql4w1tuh6remw5s0sjhklglgaplvdnbb4f1zgvos99uhuhcve29z4wduo36vuj89wah79e7kiyrpchyldlhel0khv16wgbnn5lsw8kl9o7r1c32u87ht61448pe7w0qzc8hkxrwsadeoixs4v3kr8bjracce2qeqfgyc8ss5g98k9ch13uw1vzds97xwgfi6s8mi77kc1n9q4nn5p39jiux2dx198me7r5jfhgdln6o11zq173qs0qeh3n10urrj0gz2i04q2xdp22c8xk0bkkotqxdrmav820kralw7ohe50nj1gdplpfsh7vre5e7yqxloh8r7mrkpyqnliuvpt12510chvwsaxopdfyol3yu7mgy2vgyutbzsm1p1p1sx5f8iguofhv7dvggt30v == \i\j\k\o\j\a\9\0\z\t\9\5\8\s\r\k\i\x\m\a\j\d\u\3\8\f\6\0\g\x\3\p\r\2\j\z\u\b\n\f\z\y\y\x\h\w\1\9\g\r\3\5\m\p\t\3\w\c\i\a\t\f\o\r\x\4\n\d\h\y\u\2\w\g\2\2\3\3\6\m\q\u\u\0\n\2\d\t\c\i\7\e\4\3\g\6\k\k\8\a\a\z\v\x\8\2\r\0\b\q\l\4\w\1\t\u\h\6\r\e\m\w\5\s\0\s\j\h\k\l\g\l\g\a\p\l\v\d\n\b\b\4\f\1\z\g\v\o\s\9\9\u\h\u\h\c\v\e\2\9\z\4\w\d\u\o\3\6\v\u\j\8\9\w\a\h\7\9\e\7\k\i\y\r\p\c\h\y\l\d\l\h\e\l\0\k\h\v\1\6\w\g\b\n\n\5\l\s\w\8\k\l\9\o\7\r\1\c\3\2\u\8\7\h\t\6\1\4\4\8\p\e\7\w\0\q\z\c\8\h\k\x\r\w\s\a\d\e\o\i\x\s\4\v\3\k\r\8\b\j\r\a\c\c\e\2\q\e\q\f\g\y\c\8\s\s\5\g\9\8\k\9\c\h\1\3\u\w\1\v\z\d\s\9\7\x\w\g\f\i\6\s\8\m\i\7\7\k\c\1\n\9\q\4\n\n\5\p\3\9\j\i\u\x\2\d\x\1\9\8\m\e\7\r\5\j\f\h\g\d\l\n\6\o\1\1\z\q\1\7\3\q\s\0\q\e\h\3\n\1\0\u\r\r\j\0\g\z\2\i\0\4\q\2\x\d\p\2\2\c\8\x\k\0\b\k\k\o\t\q\x\d\r\m\a\v\8\2\0\k\r\a\l\w\7\o\h\e\5\0\n\j\1\g\d\p\l\p\f\s\h\7\v\r\e\5\e\7\y\q\x\l\o\h\8\r\7\m\r\k\p\y\q\n\l\i\u\v\p\t\1\2\5\1\0\c\h\v\w\s\a\x\o\p\d\f\y\o\l\3\y\u\7\m\g\y\2\v\g\y\u\t\b\z\s\m\1\p\1\p\1\s\x\5\f\8\i\g\u\o\f\h\v\7\d\v\g\g\t\3\0\v ]] 00:45:32.815 00:45:32.815 real 0m6.773s 00:45:32.815 user 0m5.758s 00:45:32.815 sys 0m0.680s 00:45:32.815 19:38:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:32.815 19:38:48 -- common/autotest_common.sh@10 -- # set +x 00:45:33.074 19:38:48 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:45:33.074 19:38:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:33.074 19:38:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:33.074 19:38:48 -- common/autotest_common.sh@10 -- # set +x 00:45:33.074 ************************************ 00:45:33.074 START TEST dd_flag_noatime 00:45:33.074 ************************************ 00:45:33.074 19:38:48 -- common/autotest_common.sh@1111 -- # noatime 00:45:33.074 19:38:48 -- dd/posix.sh@53 -- # local atime_if 00:45:33.074 19:38:48 -- dd/posix.sh@54 -- # local atime_of 00:45:33.074 19:38:48 -- dd/posix.sh@58 -- # gen_bytes 512 00:45:33.074 19:38:48 -- dd/common.sh@98 -- # xtrace_disable 00:45:33.074 19:38:48 -- common/autotest_common.sh@10 -- # set +x 00:45:33.074 19:38:48 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:33.074 19:38:48 -- dd/posix.sh@60 -- # atime_if=1713469127 00:45:33.074 19:38:48 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:33.074 19:38:48 -- dd/posix.sh@61 -- # atime_of=1713469128 00:45:33.074 19:38:48 -- dd/posix.sh@66 -- # sleep 1 00:45:34.008 19:38:49 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:34.008 [2024-04-18 19:38:49.915937] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:34.008 [2024-04-18 19:38:49.916504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146233 ] 00:45:34.267 [2024-04-18 19:38:50.108474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:34.525 [2024-04-18 19:38:50.364786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:36.464  Copying: 512/512 [B] (average 500 kBps) 00:45:36.464 00:45:36.464 19:38:52 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:36.464 19:38:52 -- dd/posix.sh@69 -- # (( atime_if == 1713469127 )) 00:45:36.464 19:38:52 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:36.464 19:38:52 -- dd/posix.sh@70 -- # (( atime_of == 1713469128 )) 00:45:36.464 19:38:52 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:36.464 [2024-04-18 19:38:52.240565] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:36.464 [2024-04-18 19:38:52.241095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146289 ] 00:45:36.722 [2024-04-18 19:38:52.423768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:36.980 [2024-04-18 19:38:52.648734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:38.661  Copying: 512/512 [B] (average 500 kBps) 00:45:38.661 00:45:38.661 19:38:54 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:38.661 ************************************ 00:45:38.661 END TEST dd_flag_noatime 00:45:38.661 ************************************ 00:45:38.661 19:38:54 -- dd/posix.sh@73 -- # (( atime_if < 1713469133 )) 00:45:38.661 00:45:38.661 real 0m5.645s 00:45:38.661 user 0m3.924s 00:45:38.661 sys 0m0.453s 00:45:38.661 19:38:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:38.661 19:38:54 -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 19:38:54 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:45:38.661 19:38:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:38.661 19:38:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:38.661 19:38:54 -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 ************************************ 00:45:38.661 START TEST dd_flags_misc 00:45:38.661 ************************************ 00:45:38.661 19:38:54 -- common/autotest_common.sh@1111 -- # io 00:45:38.661 19:38:54 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:45:38.661 19:38:54 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:45:38.661 
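The dd_flag_noatime case that closed just above leans entirely on stat: it records the access times of both dump files, copies with --iflag=noatime and checks that neither atime moved, then copies again without the flag and checks that the source atime did advance (on a filesystem where reads still update atime, which the second half relies on). A sketch of the same sequence:

    # stat-based atime checks, mirroring the noatime trace above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_if=$(stat --printf=%X dd.dump0)    # access time of the input file
    atime_of=$(stat --printf=%X dd.dump1)
    sleep 1                                  # ensure a fresh read would bump the atime

    "$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( atime_if == $(stat --printf=%X dd.dump0) ))   # noatime: source atime untouched
    (( atime_of == $(stat --printf=%X dd.dump1) ))

    "$SPDK_DD" --if=dd.dump0 --of=dd.dump1            # plain read this time...
    (( atime_if < $(stat --printf=%X dd.dump0) ))     # ...so the source atime moved forward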
19:38:54 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:45:38.661 19:38:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:45:38.661 19:38:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:45:38.661 19:38:54 -- dd/common.sh@98 -- # xtrace_disable 00:45:38.661 19:38:54 -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 19:38:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:38.661 19:38:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:45:38.918 [2024-04-18 19:38:54.622553] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:38.918 [2024-04-18 19:38:54.623015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146342 ] 00:45:38.919 [2024-04-18 19:38:54.803070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:39.178 [2024-04-18 19:38:55.021865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:41.119  Copying: 512/512 [B] (average 500 kBps) 00:45:41.120 00:45:41.120 19:38:56 -- dd/posix.sh@93 -- # [[ ho320ierl6hkcrwdvrqazcm2ea9t5k0zcpwzhvm987wzorspv6f1nu85sgoaco2odisr1a1bd3bz3zu8iw8qcjatlapa0d2vumlhuiuup1jlv1g06mx7ihmcy3z71einyvx5gahz41az1z6z5ke8mednm1jcpbf9ricpnrl4mu12iu3p1v7izo2abiie2qayzeysn2g38edz0nbxu2alecwy8r9a0nh5e3b3evtjghclggldrpnzk8ijry8gtg30z4b2itza5g78m4lhghqjvjiw3blkh9lvl56oswct751h3c5dzqus4encm1zfnrry6865u72guet478rsm41gwlc3d5xvwwewiws7126x5qpgzd4y185lv5dh4y36qhcryk6jfxo7kfhmczjggxweh0hkaa9hd0ya66mogaarvgc9ees772cja3nf9klln4rtnb0uiu4ibfyc0xx6zk61qa7mpc6glhmj57w5h841wwpw5niuoa5ug4gz1nqonszk == \h\o\3\2\0\i\e\r\l\6\h\k\c\r\w\d\v\r\q\a\z\c\m\2\e\a\9\t\5\k\0\z\c\p\w\z\h\v\m\9\8\7\w\z\o\r\s\p\v\6\f\1\n\u\8\5\s\g\o\a\c\o\2\o\d\i\s\r\1\a\1\b\d\3\b\z\3\z\u\8\i\w\8\q\c\j\a\t\l\a\p\a\0\d\2\v\u\m\l\h\u\i\u\u\p\1\j\l\v\1\g\0\6\m\x\7\i\h\m\c\y\3\z\7\1\e\i\n\y\v\x\5\g\a\h\z\4\1\a\z\1\z\6\z\5\k\e\8\m\e\d\n\m\1\j\c\p\b\f\9\r\i\c\p\n\r\l\4\m\u\1\2\i\u\3\p\1\v\7\i\z\o\2\a\b\i\i\e\2\q\a\y\z\e\y\s\n\2\g\3\8\e\d\z\0\n\b\x\u\2\a\l\e\c\w\y\8\r\9\a\0\n\h\5\e\3\b\3\e\v\t\j\g\h\c\l\g\g\l\d\r\p\n\z\k\8\i\j\r\y\8\g\t\g\3\0\z\4\b\2\i\t\z\a\5\g\7\8\m\4\l\h\g\h\q\j\v\j\i\w\3\b\l\k\h\9\l\v\l\5\6\o\s\w\c\t\7\5\1\h\3\c\5\d\z\q\u\s\4\e\n\c\m\1\z\f\n\r\r\y\6\8\6\5\u\7\2\g\u\e\t\4\7\8\r\s\m\4\1\g\w\l\c\3\d\5\x\v\w\w\e\w\i\w\s\7\1\2\6\x\5\q\p\g\z\d\4\y\1\8\5\l\v\5\d\h\4\y\3\6\q\h\c\r\y\k\6\j\f\x\o\7\k\f\h\m\c\z\j\g\g\x\w\e\h\0\h\k\a\a\9\h\d\0\y\a\6\6\m\o\g\a\a\r\v\g\c\9\e\e\s\7\7\2\c\j\a\3\n\f\9\k\l\l\n\4\r\t\n\b\0\u\i\u\4\i\b\f\y\c\0\x\x\6\z\k\6\1\q\a\7\m\p\c\6\g\l\h\m\j\5\7\w\5\h\8\4\1\w\w\p\w\5\n\i\u\o\a\5\u\g\4\g\z\1\n\q\o\n\s\z\k ]] 00:45:41.120 19:38:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:41.120 19:38:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:45:41.120 [2024-04-18 19:38:56.882995] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:41.120 [2024-04-18 19:38:56.883456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146369 ] 00:45:41.378 [2024-04-18 19:38:57.067000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:41.637 [2024-04-18 19:38:57.350880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:43.271  Copying: 512/512 [B] (average 500 kBps) 00:45:43.271 00:45:43.271 19:38:59 -- dd/posix.sh@93 -- # [[ ho320ierl6hkcrwdvrqazcm2ea9t5k0zcpwzhvm987wzorspv6f1nu85sgoaco2odisr1a1bd3bz3zu8iw8qcjatlapa0d2vumlhuiuup1jlv1g06mx7ihmcy3z71einyvx5gahz41az1z6z5ke8mednm1jcpbf9ricpnrl4mu12iu3p1v7izo2abiie2qayzeysn2g38edz0nbxu2alecwy8r9a0nh5e3b3evtjghclggldrpnzk8ijry8gtg30z4b2itza5g78m4lhghqjvjiw3blkh9lvl56oswct751h3c5dzqus4encm1zfnrry6865u72guet478rsm41gwlc3d5xvwwewiws7126x5qpgzd4y185lv5dh4y36qhcryk6jfxo7kfhmczjggxweh0hkaa9hd0ya66mogaarvgc9ees772cja3nf9klln4rtnb0uiu4ibfyc0xx6zk61qa7mpc6glhmj57w5h841wwpw5niuoa5ug4gz1nqonszk == \h\o\3\2\0\i\e\r\l\6\h\k\c\r\w\d\v\r\q\a\z\c\m\2\e\a\9\t\5\k\0\z\c\p\w\z\h\v\m\9\8\7\w\z\o\r\s\p\v\6\f\1\n\u\8\5\s\g\o\a\c\o\2\o\d\i\s\r\1\a\1\b\d\3\b\z\3\z\u\8\i\w\8\q\c\j\a\t\l\a\p\a\0\d\2\v\u\m\l\h\u\i\u\u\p\1\j\l\v\1\g\0\6\m\x\7\i\h\m\c\y\3\z\7\1\e\i\n\y\v\x\5\g\a\h\z\4\1\a\z\1\z\6\z\5\k\e\8\m\e\d\n\m\1\j\c\p\b\f\9\r\i\c\p\n\r\l\4\m\u\1\2\i\u\3\p\1\v\7\i\z\o\2\a\b\i\i\e\2\q\a\y\z\e\y\s\n\2\g\3\8\e\d\z\0\n\b\x\u\2\a\l\e\c\w\y\8\r\9\a\0\n\h\5\e\3\b\3\e\v\t\j\g\h\c\l\g\g\l\d\r\p\n\z\k\8\i\j\r\y\8\g\t\g\3\0\z\4\b\2\i\t\z\a\5\g\7\8\m\4\l\h\g\h\q\j\v\j\i\w\3\b\l\k\h\9\l\v\l\5\6\o\s\w\c\t\7\5\1\h\3\c\5\d\z\q\u\s\4\e\n\c\m\1\z\f\n\r\r\y\6\8\6\5\u\7\2\g\u\e\t\4\7\8\r\s\m\4\1\g\w\l\c\3\d\5\x\v\w\w\e\w\i\w\s\7\1\2\6\x\5\q\p\g\z\d\4\y\1\8\5\l\v\5\d\h\4\y\3\6\q\h\c\r\y\k\6\j\f\x\o\7\k\f\h\m\c\z\j\g\g\x\w\e\h\0\h\k\a\a\9\h\d\0\y\a\6\6\m\o\g\a\a\r\v\g\c\9\e\e\s\7\7\2\c\j\a\3\n\f\9\k\l\l\n\4\r\t\n\b\0\u\i\u\4\i\b\f\y\c\0\x\x\6\z\k\6\1\q\a\7\m\p\c\6\g\l\h\m\j\5\7\w\5\h\8\4\1\w\w\p\w\5\n\i\u\o\a\5\u\g\4\g\z\1\n\q\o\n\s\z\k ]] 00:45:43.271 19:38:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:43.271 19:38:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:45:43.530 [2024-04-18 19:38:59.208596] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:43.530 [2024-04-18 19:38:59.209045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146398 ] 00:45:43.530 [2024-04-18 19:38:59.393299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:43.788 [2024-04-18 19:38:59.673810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:45.756  Copying: 512/512 [B] (average 166 kBps) 00:45:45.756 00:45:45.756 19:39:01 -- dd/posix.sh@93 -- # [[ ho320ierl6hkcrwdvrqazcm2ea9t5k0zcpwzhvm987wzorspv6f1nu85sgoaco2odisr1a1bd3bz3zu8iw8qcjatlapa0d2vumlhuiuup1jlv1g06mx7ihmcy3z71einyvx5gahz41az1z6z5ke8mednm1jcpbf9ricpnrl4mu12iu3p1v7izo2abiie2qayzeysn2g38edz0nbxu2alecwy8r9a0nh5e3b3evtjghclggldrpnzk8ijry8gtg30z4b2itza5g78m4lhghqjvjiw3blkh9lvl56oswct751h3c5dzqus4encm1zfnrry6865u72guet478rsm41gwlc3d5xvwwewiws7126x5qpgzd4y185lv5dh4y36qhcryk6jfxo7kfhmczjggxweh0hkaa9hd0ya66mogaarvgc9ees772cja3nf9klln4rtnb0uiu4ibfyc0xx6zk61qa7mpc6glhmj57w5h841wwpw5niuoa5ug4gz1nqonszk == \h\o\3\2\0\i\e\r\l\6\h\k\c\r\w\d\v\r\q\a\z\c\m\2\e\a\9\t\5\k\0\z\c\p\w\z\h\v\m\9\8\7\w\z\o\r\s\p\v\6\f\1\n\u\8\5\s\g\o\a\c\o\2\o\d\i\s\r\1\a\1\b\d\3\b\z\3\z\u\8\i\w\8\q\c\j\a\t\l\a\p\a\0\d\2\v\u\m\l\h\u\i\u\u\p\1\j\l\v\1\g\0\6\m\x\7\i\h\m\c\y\3\z\7\1\e\i\n\y\v\x\5\g\a\h\z\4\1\a\z\1\z\6\z\5\k\e\8\m\e\d\n\m\1\j\c\p\b\f\9\r\i\c\p\n\r\l\4\m\u\1\2\i\u\3\p\1\v\7\i\z\o\2\a\b\i\i\e\2\q\a\y\z\e\y\s\n\2\g\3\8\e\d\z\0\n\b\x\u\2\a\l\e\c\w\y\8\r\9\a\0\n\h\5\e\3\b\3\e\v\t\j\g\h\c\l\g\g\l\d\r\p\n\z\k\8\i\j\r\y\8\g\t\g\3\0\z\4\b\2\i\t\z\a\5\g\7\8\m\4\l\h\g\h\q\j\v\j\i\w\3\b\l\k\h\9\l\v\l\5\6\o\s\w\c\t\7\5\1\h\3\c\5\d\z\q\u\s\4\e\n\c\m\1\z\f\n\r\r\y\6\8\6\5\u\7\2\g\u\e\t\4\7\8\r\s\m\4\1\g\w\l\c\3\d\5\x\v\w\w\e\w\i\w\s\7\1\2\6\x\5\q\p\g\z\d\4\y\1\8\5\l\v\5\d\h\4\y\3\6\q\h\c\r\y\k\6\j\f\x\o\7\k\f\h\m\c\z\j\g\g\x\w\e\h\0\h\k\a\a\9\h\d\0\y\a\6\6\m\o\g\a\a\r\v\g\c\9\e\e\s\7\7\2\c\j\a\3\n\f\9\k\l\l\n\4\r\t\n\b\0\u\i\u\4\i\b\f\y\c\0\x\x\6\z\k\6\1\q\a\7\m\p\c\6\g\l\h\m\j\5\7\w\5\h\8\4\1\w\w\p\w\5\n\i\u\o\a\5\u\g\4\g\z\1\n\q\o\n\s\z\k ]] 00:45:45.756 19:39:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:45.756 19:39:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:45:45.756 [2024-04-18 19:39:01.539922] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:45.756 [2024-04-18 19:39:01.540219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146443 ] 00:45:46.014 [2024-04-18 19:39:01.717838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:46.014 [2024-04-18 19:39:01.932466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:47.955  Copying: 512/512 [B] (average 250 kBps) 00:45:47.955 00:45:47.955 19:39:03 -- dd/posix.sh@93 -- # [[ ho320ierl6hkcrwdvrqazcm2ea9t5k0zcpwzhvm987wzorspv6f1nu85sgoaco2odisr1a1bd3bz3zu8iw8qcjatlapa0d2vumlhuiuup1jlv1g06mx7ihmcy3z71einyvx5gahz41az1z6z5ke8mednm1jcpbf9ricpnrl4mu12iu3p1v7izo2abiie2qayzeysn2g38edz0nbxu2alecwy8r9a0nh5e3b3evtjghclggldrpnzk8ijry8gtg30z4b2itza5g78m4lhghqjvjiw3blkh9lvl56oswct751h3c5dzqus4encm1zfnrry6865u72guet478rsm41gwlc3d5xvwwewiws7126x5qpgzd4y185lv5dh4y36qhcryk6jfxo7kfhmczjggxweh0hkaa9hd0ya66mogaarvgc9ees772cja3nf9klln4rtnb0uiu4ibfyc0xx6zk61qa7mpc6glhmj57w5h841wwpw5niuoa5ug4gz1nqonszk == \h\o\3\2\0\i\e\r\l\6\h\k\c\r\w\d\v\r\q\a\z\c\m\2\e\a\9\t\5\k\0\z\c\p\w\z\h\v\m\9\8\7\w\z\o\r\s\p\v\6\f\1\n\u\8\5\s\g\o\a\c\o\2\o\d\i\s\r\1\a\1\b\d\3\b\z\3\z\u\8\i\w\8\q\c\j\a\t\l\a\p\a\0\d\2\v\u\m\l\h\u\i\u\u\p\1\j\l\v\1\g\0\6\m\x\7\i\h\m\c\y\3\z\7\1\e\i\n\y\v\x\5\g\a\h\z\4\1\a\z\1\z\6\z\5\k\e\8\m\e\d\n\m\1\j\c\p\b\f\9\r\i\c\p\n\r\l\4\m\u\1\2\i\u\3\p\1\v\7\i\z\o\2\a\b\i\i\e\2\q\a\y\z\e\y\s\n\2\g\3\8\e\d\z\0\n\b\x\u\2\a\l\e\c\w\y\8\r\9\a\0\n\h\5\e\3\b\3\e\v\t\j\g\h\c\l\g\g\l\d\r\p\n\z\k\8\i\j\r\y\8\g\t\g\3\0\z\4\b\2\i\t\z\a\5\g\7\8\m\4\l\h\g\h\q\j\v\j\i\w\3\b\l\k\h\9\l\v\l\5\6\o\s\w\c\t\7\5\1\h\3\c\5\d\z\q\u\s\4\e\n\c\m\1\z\f\n\r\r\y\6\8\6\5\u\7\2\g\u\e\t\4\7\8\r\s\m\4\1\g\w\l\c\3\d\5\x\v\w\w\e\w\i\w\s\7\1\2\6\x\5\q\p\g\z\d\4\y\1\8\5\l\v\5\d\h\4\y\3\6\q\h\c\r\y\k\6\j\f\x\o\7\k\f\h\m\c\z\j\g\g\x\w\e\h\0\h\k\a\a\9\h\d\0\y\a\6\6\m\o\g\a\a\r\v\g\c\9\e\e\s\7\7\2\c\j\a\3\n\f\9\k\l\l\n\4\r\t\n\b\0\u\i\u\4\i\b\f\y\c\0\x\x\6\z\k\6\1\q\a\7\m\p\c\6\g\l\h\m\j\5\7\w\5\h\8\4\1\w\w\p\w\5\n\i\u\o\a\5\u\g\4\g\z\1\n\q\o\n\s\z\k ]] 00:45:47.955 19:39:03 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:45:47.955 19:39:03 -- dd/posix.sh@86 -- # gen_bytes 512 00:45:47.955 19:39:03 -- dd/common.sh@98 -- # xtrace_disable 00:45:47.955 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:45:47.955 19:39:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:47.955 19:39:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:45:47.955 [2024-04-18 19:39:03.773952] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
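dd_flags_misc walks a small matrix: each read flag in flags_ro=(direct nonblock) is paired with every write flag in flags_rw=(direct nonblock sync dsync), a fresh 512-byte dump0 is generated per read flag, and after each copy the output is compared back against the input, which is what produces the long escaped [[ ... ]] lines above. At this point the direct row has just finished and the nonblock row is starting. The loop is essentially the following sketch (gen_bytes again stands in for the dd/common.sh helper):

    # Shape of the dd_flags_misc loop traced above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    gen_bytes() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)

    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > dd.dump0                      # new payload per input flag
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" \
                       --of=dd.dump1 --oflag="$flag_rw"
            [[ $(<dd.dump1) == "$(<dd.dump0)" ]]      # output must match the input
        done
    done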
00:45:47.955 [2024-04-18 19:39:03.774369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146473 ] 00:45:48.214 [2024-04-18 19:39:03.958310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:48.472 [2024-04-18 19:39:04.238834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:50.107  Copying: 512/512 [B] (average 500 kBps) 00:45:50.107 00:45:50.107 19:39:05 -- dd/posix.sh@93 -- # [[ x7c4c5iwklmhwtbkvpogrwmiha71rjddqkibrnhnsledwc0g5clfvccw2t973vlhtf5ynfjfu2i5k97pu2au9jeerp5ztpyunvklj562tp4r7xchgmnenwu17ov819mypbhxg43wdarcr0e3ugzf00y9wp748izbyicyb4rzizc6vjk9664dfpkh9x60ygda2i0bxzcn3jistmeruom9rbnau5kfu2e2279r6affqlz4assgyknqove3c2o2mdey7ahc4w8cblf3qa77qg84mj37c9nvc6xeeife7bk13ljbt39nukaym4cxmv0d4gmmp8lyeyagiqqhrt6qbf1bvvto1egbyxxc13r6ir6lbg7tle1nl7shgsr0hnwbovq7uxh0a67fzqy2otedjck5xb6af0jkqs7hm6xdbjq6khmid6cd57yr23zd36vlfptd8w0u9vid56ic5auqv0tsddxce8o0wpbpy38taxwhfbrkq4flbxjuy06adg4ukdw4 == \x\7\c\4\c\5\i\w\k\l\m\h\w\t\b\k\v\p\o\g\r\w\m\i\h\a\7\1\r\j\d\d\q\k\i\b\r\n\h\n\s\l\e\d\w\c\0\g\5\c\l\f\v\c\c\w\2\t\9\7\3\v\l\h\t\f\5\y\n\f\j\f\u\2\i\5\k\9\7\p\u\2\a\u\9\j\e\e\r\p\5\z\t\p\y\u\n\v\k\l\j\5\6\2\t\p\4\r\7\x\c\h\g\m\n\e\n\w\u\1\7\o\v\8\1\9\m\y\p\b\h\x\g\4\3\w\d\a\r\c\r\0\e\3\u\g\z\f\0\0\y\9\w\p\7\4\8\i\z\b\y\i\c\y\b\4\r\z\i\z\c\6\v\j\k\9\6\6\4\d\f\p\k\h\9\x\6\0\y\g\d\a\2\i\0\b\x\z\c\n\3\j\i\s\t\m\e\r\u\o\m\9\r\b\n\a\u\5\k\f\u\2\e\2\2\7\9\r\6\a\f\f\q\l\z\4\a\s\s\g\y\k\n\q\o\v\e\3\c\2\o\2\m\d\e\y\7\a\h\c\4\w\8\c\b\l\f\3\q\a\7\7\q\g\8\4\m\j\3\7\c\9\n\v\c\6\x\e\e\i\f\e\7\b\k\1\3\l\j\b\t\3\9\n\u\k\a\y\m\4\c\x\m\v\0\d\4\g\m\m\p\8\l\y\e\y\a\g\i\q\q\h\r\t\6\q\b\f\1\b\v\v\t\o\1\e\g\b\y\x\x\c\1\3\r\6\i\r\6\l\b\g\7\t\l\e\1\n\l\7\s\h\g\s\r\0\h\n\w\b\o\v\q\7\u\x\h\0\a\6\7\f\z\q\y\2\o\t\e\d\j\c\k\5\x\b\6\a\f\0\j\k\q\s\7\h\m\6\x\d\b\j\q\6\k\h\m\i\d\6\c\d\5\7\y\r\2\3\z\d\3\6\v\l\f\p\t\d\8\w\0\u\9\v\i\d\5\6\i\c\5\a\u\q\v\0\t\s\d\d\x\c\e\8\o\0\w\p\b\p\y\3\8\t\a\x\w\h\f\b\r\k\q\4\f\l\b\x\j\u\y\0\6\a\d\g\4\u\k\d\w\4 ]] 00:45:50.107 19:39:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:50.107 19:39:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:45:50.107 [2024-04-18 19:39:05.984525] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:50.107 [2024-04-18 19:39:05.984719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146502 ] 00:45:50.366 [2024-04-18 19:39:06.164006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:50.625 [2024-04-18 19:39:06.375662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:52.260  Copying: 512/512 [B] (average 500 kBps) 00:45:52.260 00:45:52.260 19:39:08 -- dd/posix.sh@93 -- # [[ x7c4c5iwklmhwtbkvpogrwmiha71rjddqkibrnhnsledwc0g5clfvccw2t973vlhtf5ynfjfu2i5k97pu2au9jeerp5ztpyunvklj562tp4r7xchgmnenwu17ov819mypbhxg43wdarcr0e3ugzf00y9wp748izbyicyb4rzizc6vjk9664dfpkh9x60ygda2i0bxzcn3jistmeruom9rbnau5kfu2e2279r6affqlz4assgyknqove3c2o2mdey7ahc4w8cblf3qa77qg84mj37c9nvc6xeeife7bk13ljbt39nukaym4cxmv0d4gmmp8lyeyagiqqhrt6qbf1bvvto1egbyxxc13r6ir6lbg7tle1nl7shgsr0hnwbovq7uxh0a67fzqy2otedjck5xb6af0jkqs7hm6xdbjq6khmid6cd57yr23zd36vlfptd8w0u9vid56ic5auqv0tsddxce8o0wpbpy38taxwhfbrkq4flbxjuy06adg4ukdw4 == \x\7\c\4\c\5\i\w\k\l\m\h\w\t\b\k\v\p\o\g\r\w\m\i\h\a\7\1\r\j\d\d\q\k\i\b\r\n\h\n\s\l\e\d\w\c\0\g\5\c\l\f\v\c\c\w\2\t\9\7\3\v\l\h\t\f\5\y\n\f\j\f\u\2\i\5\k\9\7\p\u\2\a\u\9\j\e\e\r\p\5\z\t\p\y\u\n\v\k\l\j\5\6\2\t\p\4\r\7\x\c\h\g\m\n\e\n\w\u\1\7\o\v\8\1\9\m\y\p\b\h\x\g\4\3\w\d\a\r\c\r\0\e\3\u\g\z\f\0\0\y\9\w\p\7\4\8\i\z\b\y\i\c\y\b\4\r\z\i\z\c\6\v\j\k\9\6\6\4\d\f\p\k\h\9\x\6\0\y\g\d\a\2\i\0\b\x\z\c\n\3\j\i\s\t\m\e\r\u\o\m\9\r\b\n\a\u\5\k\f\u\2\e\2\2\7\9\r\6\a\f\f\q\l\z\4\a\s\s\g\y\k\n\q\o\v\e\3\c\2\o\2\m\d\e\y\7\a\h\c\4\w\8\c\b\l\f\3\q\a\7\7\q\g\8\4\m\j\3\7\c\9\n\v\c\6\x\e\e\i\f\e\7\b\k\1\3\l\j\b\t\3\9\n\u\k\a\y\m\4\c\x\m\v\0\d\4\g\m\m\p\8\l\y\e\y\a\g\i\q\q\h\r\t\6\q\b\f\1\b\v\v\t\o\1\e\g\b\y\x\x\c\1\3\r\6\i\r\6\l\b\g\7\t\l\e\1\n\l\7\s\h\g\s\r\0\h\n\w\b\o\v\q\7\u\x\h\0\a\6\7\f\z\q\y\2\o\t\e\d\j\c\k\5\x\b\6\a\f\0\j\k\q\s\7\h\m\6\x\d\b\j\q\6\k\h\m\i\d\6\c\d\5\7\y\r\2\3\z\d\3\6\v\l\f\p\t\d\8\w\0\u\9\v\i\d\5\6\i\c\5\a\u\q\v\0\t\s\d\d\x\c\e\8\o\0\w\p\b\p\y\3\8\t\a\x\w\h\f\b\r\k\q\4\f\l\b\x\j\u\y\0\6\a\d\g\4\u\k\d\w\4 ]] 00:45:52.260 19:39:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:52.260 19:39:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:45:52.260 [2024-04-18 19:39:08.160195] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:52.260 [2024-04-18 19:39:08.160393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146530 ] 00:45:52.518 [2024-04-18 19:39:08.337348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:52.777 [2024-04-18 19:39:08.597823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:54.413  Copying: 512/512 [B] (average 166 kBps) 00:45:54.413 00:45:54.413 19:39:10 -- dd/posix.sh@93 -- # [[ x7c4c5iwklmhwtbkvpogrwmiha71rjddqkibrnhnsledwc0g5clfvccw2t973vlhtf5ynfjfu2i5k97pu2au9jeerp5ztpyunvklj562tp4r7xchgmnenwu17ov819mypbhxg43wdarcr0e3ugzf00y9wp748izbyicyb4rzizc6vjk9664dfpkh9x60ygda2i0bxzcn3jistmeruom9rbnau5kfu2e2279r6affqlz4assgyknqove3c2o2mdey7ahc4w8cblf3qa77qg84mj37c9nvc6xeeife7bk13ljbt39nukaym4cxmv0d4gmmp8lyeyagiqqhrt6qbf1bvvto1egbyxxc13r6ir6lbg7tle1nl7shgsr0hnwbovq7uxh0a67fzqy2otedjck5xb6af0jkqs7hm6xdbjq6khmid6cd57yr23zd36vlfptd8w0u9vid56ic5auqv0tsddxce8o0wpbpy38taxwhfbrkq4flbxjuy06adg4ukdw4 == \x\7\c\4\c\5\i\w\k\l\m\h\w\t\b\k\v\p\o\g\r\w\m\i\h\a\7\1\r\j\d\d\q\k\i\b\r\n\h\n\s\l\e\d\w\c\0\g\5\c\l\f\v\c\c\w\2\t\9\7\3\v\l\h\t\f\5\y\n\f\j\f\u\2\i\5\k\9\7\p\u\2\a\u\9\j\e\e\r\p\5\z\t\p\y\u\n\v\k\l\j\5\6\2\t\p\4\r\7\x\c\h\g\m\n\e\n\w\u\1\7\o\v\8\1\9\m\y\p\b\h\x\g\4\3\w\d\a\r\c\r\0\e\3\u\g\z\f\0\0\y\9\w\p\7\4\8\i\z\b\y\i\c\y\b\4\r\z\i\z\c\6\v\j\k\9\6\6\4\d\f\p\k\h\9\x\6\0\y\g\d\a\2\i\0\b\x\z\c\n\3\j\i\s\t\m\e\r\u\o\m\9\r\b\n\a\u\5\k\f\u\2\e\2\2\7\9\r\6\a\f\f\q\l\z\4\a\s\s\g\y\k\n\q\o\v\e\3\c\2\o\2\m\d\e\y\7\a\h\c\4\w\8\c\b\l\f\3\q\a\7\7\q\g\8\4\m\j\3\7\c\9\n\v\c\6\x\e\e\i\f\e\7\b\k\1\3\l\j\b\t\3\9\n\u\k\a\y\m\4\c\x\m\v\0\d\4\g\m\m\p\8\l\y\e\y\a\g\i\q\q\h\r\t\6\q\b\f\1\b\v\v\t\o\1\e\g\b\y\x\x\c\1\3\r\6\i\r\6\l\b\g\7\t\l\e\1\n\l\7\s\h\g\s\r\0\h\n\w\b\o\v\q\7\u\x\h\0\a\6\7\f\z\q\y\2\o\t\e\d\j\c\k\5\x\b\6\a\f\0\j\k\q\s\7\h\m\6\x\d\b\j\q\6\k\h\m\i\d\6\c\d\5\7\y\r\2\3\z\d\3\6\v\l\f\p\t\d\8\w\0\u\9\v\i\d\5\6\i\c\5\a\u\q\v\0\t\s\d\d\x\c\e\8\o\0\w\p\b\p\y\3\8\t\a\x\w\h\f\b\r\k\q\4\f\l\b\x\j\u\y\0\6\a\d\g\4\u\k\d\w\4 ]] 00:45:54.413 19:39:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:54.413 19:39:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:45:54.671 [2024-04-18 19:39:10.377471] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:45:54.671 [2024-04-18 19:39:10.378295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146555 ] 00:45:54.671 [2024-04-18 19:39:10.555355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:54.930 [2024-04-18 19:39:10.765962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.562  Copying: 512/512 [B] (average 250 kBps) 00:45:56.562 00:45:56.562 ************************************ 00:45:56.563 END TEST dd_flags_misc 00:45:56.563 ************************************ 00:45:56.563 19:39:12 -- dd/posix.sh@93 -- # [[ x7c4c5iwklmhwtbkvpogrwmiha71rjddqkibrnhnsledwc0g5clfvccw2t973vlhtf5ynfjfu2i5k97pu2au9jeerp5ztpyunvklj562tp4r7xchgmnenwu17ov819mypbhxg43wdarcr0e3ugzf00y9wp748izbyicyb4rzizc6vjk9664dfpkh9x60ygda2i0bxzcn3jistmeruom9rbnau5kfu2e2279r6affqlz4assgyknqove3c2o2mdey7ahc4w8cblf3qa77qg84mj37c9nvc6xeeife7bk13ljbt39nukaym4cxmv0d4gmmp8lyeyagiqqhrt6qbf1bvvto1egbyxxc13r6ir6lbg7tle1nl7shgsr0hnwbovq7uxh0a67fzqy2otedjck5xb6af0jkqs7hm6xdbjq6khmid6cd57yr23zd36vlfptd8w0u9vid56ic5auqv0tsddxce8o0wpbpy38taxwhfbrkq4flbxjuy06adg4ukdw4 == \x\7\c\4\c\5\i\w\k\l\m\h\w\t\b\k\v\p\o\g\r\w\m\i\h\a\7\1\r\j\d\d\q\k\i\b\r\n\h\n\s\l\e\d\w\c\0\g\5\c\l\f\v\c\c\w\2\t\9\7\3\v\l\h\t\f\5\y\n\f\j\f\u\2\i\5\k\9\7\p\u\2\a\u\9\j\e\e\r\p\5\z\t\p\y\u\n\v\k\l\j\5\6\2\t\p\4\r\7\x\c\h\g\m\n\e\n\w\u\1\7\o\v\8\1\9\m\y\p\b\h\x\g\4\3\w\d\a\r\c\r\0\e\3\u\g\z\f\0\0\y\9\w\p\7\4\8\i\z\b\y\i\c\y\b\4\r\z\i\z\c\6\v\j\k\9\6\6\4\d\f\p\k\h\9\x\6\0\y\g\d\a\2\i\0\b\x\z\c\n\3\j\i\s\t\m\e\r\u\o\m\9\r\b\n\a\u\5\k\f\u\2\e\2\2\7\9\r\6\a\f\f\q\l\z\4\a\s\s\g\y\k\n\q\o\v\e\3\c\2\o\2\m\d\e\y\7\a\h\c\4\w\8\c\b\l\f\3\q\a\7\7\q\g\8\4\m\j\3\7\c\9\n\v\c\6\x\e\e\i\f\e\7\b\k\1\3\l\j\b\t\3\9\n\u\k\a\y\m\4\c\x\m\v\0\d\4\g\m\m\p\8\l\y\e\y\a\g\i\q\q\h\r\t\6\q\b\f\1\b\v\v\t\o\1\e\g\b\y\x\x\c\1\3\r\6\i\r\6\l\b\g\7\t\l\e\1\n\l\7\s\h\g\s\r\0\h\n\w\b\o\v\q\7\u\x\h\0\a\6\7\f\z\q\y\2\o\t\e\d\j\c\k\5\x\b\6\a\f\0\j\k\q\s\7\h\m\6\x\d\b\j\q\6\k\h\m\i\d\6\c\d\5\7\y\r\2\3\z\d\3\6\v\l\f\p\t\d\8\w\0\u\9\v\i\d\5\6\i\c\5\a\u\q\v\0\t\s\d\d\x\c\e\8\o\0\w\p\b\p\y\3\8\t\a\x\w\h\f\b\r\k\q\4\f\l\b\x\j\u\y\0\6\a\d\g\4\u\k\d\w\4 ]] 00:45:56.563 00:45:56.563 real 0m17.928s 00:45:56.563 user 0m14.929s 00:45:56.563 sys 0m1.939s 00:45:56.563 19:39:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:56.563 19:39:12 -- common/autotest_common.sh@10 -- # set +x 00:45:56.855 19:39:12 -- dd/posix.sh@131 -- # tests_forced_aio 00:45:56.855 19:39:12 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:45:56.855 * Second test run, using AIO 00:45:56.855 19:39:12 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:45:56.855 19:39:12 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:45:56.855 19:39:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:56.855 19:39:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:56.855 19:39:12 -- common/autotest_common.sh@10 -- # set +x 00:45:56.855 ************************************ 00:45:56.855 START TEST dd_flag_append_forced_aio 00:45:56.855 ************************************ 00:45:56.855 19:39:12 -- common/autotest_common.sh@1111 -- # append 00:45:56.855 19:39:12 -- dd/posix.sh@16 -- # local dump0 00:45:56.855 19:39:12 -- dd/posix.sh@17 -- # local dump1 00:45:56.855 19:39:12 -- dd/posix.sh@19 -- # gen_bytes 32 00:45:56.855 19:39:12 -- dd/common.sh@98 -- # xtrace_disable 
00:45:56.855 19:39:12 -- common/autotest_common.sh@10 -- # set +x 00:45:56.855 19:39:12 -- dd/posix.sh@19 -- # dump0=iomtzfx9kmwzbag2q3gdwh9u6v31ayp4 00:45:56.855 19:39:12 -- dd/posix.sh@20 -- # gen_bytes 32 00:45:56.855 19:39:12 -- dd/common.sh@98 -- # xtrace_disable 00:45:56.855 19:39:12 -- common/autotest_common.sh@10 -- # set +x 00:45:56.855 19:39:12 -- dd/posix.sh@20 -- # dump1=n8f001is105ly7yoltsqeeqkzw2k2m1t 00:45:56.855 19:39:12 -- dd/posix.sh@22 -- # printf %s iomtzfx9kmwzbag2q3gdwh9u6v31ayp4 00:45:56.855 19:39:12 -- dd/posix.sh@23 -- # printf %s n8f001is105ly7yoltsqeeqkzw2k2m1t 00:45:56.855 19:39:12 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:45:56.855 [2024-04-18 19:39:12.621245] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:56.855 [2024-04-18 19:39:12.621463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146631 ] 00:45:57.112 [2024-04-18 19:39:12.803288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:57.370 [2024-04-18 19:39:13.085531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:59.004  Copying: 32/32 [B] (average 31 kBps) 00:45:59.004 00:45:59.004 19:39:14 -- dd/posix.sh@27 -- # [[ n8f001is105ly7yoltsqeeqkzw2k2m1tiomtzfx9kmwzbag2q3gdwh9u6v31ayp4 == \n\8\f\0\0\1\i\s\1\0\5\l\y\7\y\o\l\t\s\q\e\e\q\k\z\w\2\k\2\m\1\t\i\o\m\t\z\f\x\9\k\m\w\z\b\a\g\2\q\3\g\d\w\h\9\u\6\v\3\1\a\y\p\4 ]] 00:45:59.004 00:45:59.004 real 0m2.203s 00:45:59.004 user 0m1.855s 00:45:59.004 sys 0m0.217s 00:45:59.004 19:39:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:45:59.004 19:39:14 -- common/autotest_common.sh@10 -- # set +x 00:45:59.004 ************************************ 00:45:59.004 END TEST dd_flag_append_forced_aio 00:45:59.004 ************************************ 00:45:59.004 19:39:14 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:45:59.004 19:39:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:45:59.004 19:39:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:45:59.004 19:39:14 -- common/autotest_common.sh@10 -- # set +x 00:45:59.004 ************************************ 00:45:59.004 START TEST dd_flag_directory_forced_aio 00:45:59.004 ************************************ 00:45:59.004 19:39:14 -- common/autotest_common.sh@1111 -- # directory 00:45:59.004 19:39:14 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:59.004 19:39:14 -- common/autotest_common.sh@638 -- # local es=0 00:45:59.004 19:39:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:59.004 19:39:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:59.004 19:39:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:59.004 19:39:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:59.004 19:39:14 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:59.004 19:39:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:59.004 19:39:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:45:59.004 19:39:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:59.004 19:39:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:59.004 19:39:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:59.004 [2024-04-18 19:39:14.919058] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:45:59.004 [2024-04-18 19:39:14.919242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146689 ] 00:45:59.263 [2024-04-18 19:39:15.096374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:59.521 [2024-04-18 19:39:15.309381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:59.779 [2024-04-18 19:39:15.639467] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:45:59.779 [2024-04-18 19:39:15.639544] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:45:59.780 [2024-04-18 19:39:15.639570] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:00.713 [2024-04-18 19:39:16.522433] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:01.305 19:39:16 -- common/autotest_common.sh@641 -- # es=236 00:46:01.305 19:39:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:46:01.305 19:39:16 -- common/autotest_common.sh@650 -- # es=108 00:46:01.305 19:39:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:46:01.305 19:39:16 -- common/autotest_common.sh@658 -- # es=1 00:46:01.305 19:39:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:46:01.305 19:39:16 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:46:01.305 19:39:16 -- common/autotest_common.sh@638 -- # local es=0 00:46:01.305 19:39:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:46:01.305 19:39:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:01.305 19:39:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:01.305 19:39:17 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:01.305 19:39:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:01.305 19:39:17 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:01.305 19:39:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:01.305 19:39:17 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:46:01.305 19:39:17 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:01.305 19:39:17 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:46:01.305 [2024-04-18 19:39:17.083836] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:46:01.305 [2024-04-18 19:39:17.084077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146716 ] 00:46:01.564 [2024-04-18 19:39:17.264531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:01.822 [2024-04-18 19:39:17.549228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:02.080 [2024-04-18 19:39:17.919119] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:46:02.080 [2024-04-18 19:39:17.919210] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:46:02.080 [2024-04-18 19:39:17.919242] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:03.016 [2024-04-18 19:39:18.817614] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:03.583 19:39:19 -- common/autotest_common.sh@641 -- # es=236 00:46:03.583 19:39:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:46:03.583 19:39:19 -- common/autotest_common.sh@650 -- # es=108 00:46:03.583 19:39:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:46:03.583 19:39:19 -- common/autotest_common.sh@658 -- # es=1 00:46:03.583 19:39:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:46:03.583 00:46:03.583 real 0m4.456s 00:46:03.583 user 0m3.807s 00:46:03.583 sys 0m0.447s 00:46:03.583 19:39:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:03.583 ************************************ 00:46:03.583 END TEST dd_flag_directory_forced_aio 00:46:03.583 19:39:19 -- common/autotest_common.sh@10 -- # set +x 00:46:03.583 ************************************ 00:46:03.583 19:39:19 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:46:03.583 19:39:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:46:03.583 19:39:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:03.583 19:39:19 -- common/autotest_common.sh@10 -- # set +x 00:46:03.583 ************************************ 00:46:03.583 START TEST dd_flag_nofollow_forced_aio 00:46:03.583 ************************************ 00:46:03.583 19:39:19 -- common/autotest_common.sh@1111 -- # nofollow 00:46:03.583 19:39:19 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:46:03.583 19:39:19 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:46:03.583 19:39:19 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:46:03.583 19:39:19 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:46:03.583 19:39:19 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:03.583 19:39:19 -- common/autotest_common.sh@638 -- # local es=0 00:46:03.583 19:39:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:03.583 19:39:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:03.583 19:39:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:03.583 19:39:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:03.583 19:39:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:03.583 19:39:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:03.583 19:39:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:03.583 19:39:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:03.583 19:39:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:03.583 19:39:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:03.583 [2024-04-18 19:39:19.479782] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:46:03.583 [2024-04-18 19:39:19.479999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146770 ] 00:46:03.842 [2024-04-18 19:39:19.659745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:04.100 [2024-04-18 19:39:19.932487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:04.358 [2024-04-18 19:39:20.276612] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:46:04.358 [2024-04-18 19:39:20.276701] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:46:04.358 [2024-04-18 19:39:20.276732] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:05.293 [2024-04-18 19:39:21.163654] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:05.861 19:39:21 -- common/autotest_common.sh@641 -- # es=216 00:46:05.861 19:39:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:46:05.861 19:39:21 -- common/autotest_common.sh@650 -- # es=88 00:46:05.861 19:39:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:46:05.861 19:39:21 -- common/autotest_common.sh@658 -- # es=1 00:46:05.861 19:39:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:46:05.861 19:39:21 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:46:05.861 19:39:21 -- common/autotest_common.sh@638 -- # local es=0 00:46:05.861 19:39:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:46:05.861 19:39:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:05.861 19:39:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:05.861 19:39:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:05.861 19:39:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:05.861 19:39:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:05.861 19:39:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:46:05.861 19:39:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:05.861 19:39:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:05.861 19:39:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:46:05.861 [2024-04-18 19:39:21.712025] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:46:05.861 [2024-04-18 19:39:21.713073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146802 ] 00:46:06.120 [2024-04-18 19:39:21.888696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:06.378 [2024-04-18 19:39:22.138918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:06.636 [2024-04-18 19:39:22.469994] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:46:06.636 [2024-04-18 19:39:22.470070] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:46:06.636 [2024-04-18 19:39:22.470097] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:07.568 [2024-04-18 19:39:23.352293] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:08.133 19:39:23 -- common/autotest_common.sh@641 -- # es=216 00:46:08.133 19:39:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:46:08.133 19:39:23 -- common/autotest_common.sh@650 -- # es=88 00:46:08.133 19:39:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:46:08.133 19:39:23 -- common/autotest_common.sh@658 -- # es=1 00:46:08.133 19:39:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:46:08.133 19:39:23 -- dd/posix.sh@46 -- # gen_bytes 512 00:46:08.133 19:39:23 -- dd/common.sh@98 -- # xtrace_disable 00:46:08.133 19:39:23 -- common/autotest_common.sh@10 -- # set +x 00:46:08.133 19:39:23 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:08.133 [2024-04-18 19:39:23.908403] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:08.133 [2024-04-18 19:39:23.908591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146849 ] 00:46:08.391 [2024-04-18 19:39:24.087764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:08.649 [2024-04-18 19:39:24.361739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:10.280  Copying: 512/512 [B] (average 500 kBps) 00:46:10.280 00:46:10.280 19:39:26 -- dd/posix.sh@49 -- # [[ j3ufbqc47rgngkrddg458nzlw376rrchn4pifh8ltkbg2x3i5zrvusz05gd505yfkgj108omu1lu9qph7k08cb744arumntmviz4ldockwv0wpxzgh7hxm7i3859n1jr40w3umv9yi3l3l7e1uch5ojmv9bd4c4qmq3nhxqa6fxhf6vd7ff5h0bkep92khgzbl8wy61yuqpk9g3xqnraxqrm9hp91drlvr37c6bt2reez6t2idgrtti9aueq49xw5l9nnmtl78s6s67sgq8gi4hew3xj3z6q78186k1vsiillb6o2x3c259cqfqxgvcxu2smu285q7rkt0zpss9pyjjo6apqgel3v9mcv1hxipny9d01a1dp8rmmefd249alyg1v0adgkui67tthecjzzt1uqkeq61nf2i0aqbx8vgbalbf7x2iyfgjs59w61flxccya9eceoouoe38zdm7ble7jklrgoibrwjricbg4usqoikp1d1ud7gs5cj8gojg8 == \j\3\u\f\b\q\c\4\7\r\g\n\g\k\r\d\d\g\4\5\8\n\z\l\w\3\7\6\r\r\c\h\n\4\p\i\f\h\8\l\t\k\b\g\2\x\3\i\5\z\r\v\u\s\z\0\5\g\d\5\0\5\y\f\k\g\j\1\0\8\o\m\u\1\l\u\9\q\p\h\7\k\0\8\c\b\7\4\4\a\r\u\m\n\t\m\v\i\z\4\l\d\o\c\k\w\v\0\w\p\x\z\g\h\7\h\x\m\7\i\3\8\5\9\n\1\j\r\4\0\w\3\u\m\v\9\y\i\3\l\3\l\7\e\1\u\c\h\5\o\j\m\v\9\b\d\4\c\4\q\m\q\3\n\h\x\q\a\6\f\x\h\f\6\v\d\7\f\f\5\h\0\b\k\e\p\9\2\k\h\g\z\b\l\8\w\y\6\1\y\u\q\p\k\9\g\3\x\q\n\r\a\x\q\r\m\9\h\p\9\1\d\r\l\v\r\3\7\c\6\b\t\2\r\e\e\z\6\t\2\i\d\g\r\t\t\i\9\a\u\e\q\4\9\x\w\5\l\9\n\n\m\t\l\7\8\s\6\s\6\7\s\g\q\8\g\i\4\h\e\w\3\x\j\3\z\6\q\7\8\1\8\6\k\1\v\s\i\i\l\l\b\6\o\2\x\3\c\2\5\9\c\q\f\q\x\g\v\c\x\u\2\s\m\u\2\8\5\q\7\r\k\t\0\z\p\s\s\9\p\y\j\j\o\6\a\p\q\g\e\l\3\v\9\m\c\v\1\h\x\i\p\n\y\9\d\0\1\a\1\d\p\8\r\m\m\e\f\d\2\4\9\a\l\y\g\1\v\0\a\d\g\k\u\i\6\7\t\t\h\e\c\j\z\z\t\1\u\q\k\e\q\6\1\n\f\2\i\0\a\q\b\x\8\v\g\b\a\l\b\f\7\x\2\i\y\f\g\j\s\5\9\w\6\1\f\l\x\c\c\y\a\9\e\c\e\o\o\u\o\e\3\8\z\d\m\7\b\l\e\7\j\k\l\r\g\o\i\b\r\w\j\r\i\c\b\g\4\u\s\q\o\i\k\p\1\d\1\u\d\7\g\s\5\c\j\8\g\o\j\g\8 ]] 00:46:10.280 00:46:10.280 real 0m6.705s 00:46:10.280 user 0m5.636s 00:46:10.280 sys 0m0.736s 00:46:10.280 19:39:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:10.280 ************************************ 00:46:10.280 END TEST dd_flag_nofollow_forced_aio 00:46:10.280 ************************************ 00:46:10.280 19:39:26 -- common/autotest_common.sh@10 -- # set +x 00:46:10.280 19:39:26 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:46:10.280 19:39:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:46:10.280 19:39:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:10.280 19:39:26 -- common/autotest_common.sh@10 -- # set +x 00:46:10.280 ************************************ 00:46:10.280 START TEST dd_flag_noatime_forced_aio 00:46:10.280 ************************************ 00:46:10.280 19:39:26 -- common/autotest_common.sh@1111 -- # noatime 00:46:10.280 19:39:26 -- dd/posix.sh@53 -- # local atime_if 00:46:10.280 19:39:26 -- dd/posix.sh@54 -- # local atime_of 00:46:10.280 19:39:26 -- dd/posix.sh@58 -- # gen_bytes 512 00:46:10.280 19:39:26 -- dd/common.sh@98 -- # xtrace_disable 00:46:10.280 19:39:26 -- common/autotest_common.sh@10 -- # set +x 00:46:10.538 19:39:26 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:10.538 19:39:26 -- dd/posix.sh@60 -- # atime_if=1713469164 
00:46:10.538 19:39:26 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:10.538 19:39:26 -- dd/posix.sh@61 -- # atime_of=1713469166 00:46:10.538 19:39:26 -- dd/posix.sh@66 -- # sleep 1 00:46:11.471 19:39:27 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:11.471 [2024-04-18 19:39:27.290920] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:46:11.471 [2024-04-18 19:39:27.291112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146921 ] 00:46:11.728 [2024-04-18 19:39:27.473064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:11.986 [2024-04-18 19:39:27.770611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:13.620  Copying: 512/512 [B] (average 500 kBps) 00:46:13.620 00:46:13.620 19:39:29 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:13.620 19:39:29 -- dd/posix.sh@69 -- # (( atime_if == 1713469164 )) 00:46:13.620 19:39:29 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:13.620 19:39:29 -- dd/posix.sh@70 -- # (( atime_of == 1713469166 )) 00:46:13.620 19:39:29 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:13.620 [2024-04-18 19:39:29.542239] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:13.620 [2024-04-18 19:39:29.542378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146948 ] 00:46:13.878 [2024-04-18 19:39:29.701558] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:14.136 [2024-04-18 19:39:29.921573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.771  Copying: 512/512 [B] (average 500 kBps) 00:46:15.771 00:46:15.771 19:39:31 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:15.771 19:39:31 -- dd/posix.sh@73 -- # (( atime_if < 1713469170 )) 00:46:15.771 00:46:15.771 real 0m5.441s 00:46:15.771 user 0m3.676s 00:46:15.771 sys 0m0.496s 00:46:15.771 ************************************ 00:46:15.771 END TEST dd_flag_noatime_forced_aio 00:46:15.771 ************************************ 00:46:15.771 19:39:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:15.771 19:39:31 -- common/autotest_common.sh@10 -- # set +x 00:46:15.771 19:39:31 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:46:15.771 19:39:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:46:15.771 19:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:15.771 19:39:31 -- common/autotest_common.sh@10 -- # set +x 00:46:16.029 ************************************ 00:46:16.029 START TEST dd_flags_misc_forced_aio 00:46:16.029 ************************************ 00:46:16.029 19:39:31 -- common/autotest_common.sh@1111 -- # io 00:46:16.029 19:39:31 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:46:16.029 19:39:31 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:46:16.029 19:39:31 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:46:16.029 19:39:31 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:46:16.029 19:39:31 -- dd/posix.sh@86 -- # gen_bytes 512 00:46:16.029 19:39:31 -- dd/common.sh@98 -- # xtrace_disable 00:46:16.029 19:39:31 -- common/autotest_common.sh@10 -- # set +x 00:46:16.029 19:39:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:16.029 19:39:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:46:16.029 [2024-04-18 19:39:31.811303] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:16.029 [2024-04-18 19:39:31.811542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147000 ] 00:46:16.288 [2024-04-18 19:39:31.999520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.547 [2024-04-18 19:39:32.274320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:18.211  Copying: 512/512 [B] (average 500 kBps) 00:46:18.211 00:46:18.211 19:39:33 -- dd/posix.sh@93 -- # [[ uaq1z8yofnao6fiywwsu1m9e4plm9gmu7mjay82nb5g06vwwogx503ercubpyuagvyini7whjmlexoisfsycxhbybk85qtgu1yj0xmjezqqg8nav1hyr3kwg8yb28yanm4dhf51sugazp2y7ds9hy3fwq3p2hzjawxzi82ys3kv1car1q2iie89od2bz7i1zcnrl0euhfbgzv1k3nw15duo4cu5mys9y3c32xvpcq1nwvd326fzmnqo6dnfnc7vamnn84i3p80dn3flijxsqcqwel8m5wj5az1t23dwoz43tx127d7bidzl21yr2e4iknkr78lp0ay9tyuqcw400ubnso7310yhiu9dhq5l39kyca3mnon7g93l48il1gsjszq1v1fkwkzjqt5pokk66f8ogiuyzqo0pnh8urv1knn4r33x1e0g8d5rg7oxhwj53xo30orf99feyhev1kb9acoty0xftlj4p9zdeldeg3q7eobhdteissh5x55wuf90d == \u\a\q\1\z\8\y\o\f\n\a\o\6\f\i\y\w\w\s\u\1\m\9\e\4\p\l\m\9\g\m\u\7\m\j\a\y\8\2\n\b\5\g\0\6\v\w\w\o\g\x\5\0\3\e\r\c\u\b\p\y\u\a\g\v\y\i\n\i\7\w\h\j\m\l\e\x\o\i\s\f\s\y\c\x\h\b\y\b\k\8\5\q\t\g\u\1\y\j\0\x\m\j\e\z\q\q\g\8\n\a\v\1\h\y\r\3\k\w\g\8\y\b\2\8\y\a\n\m\4\d\h\f\5\1\s\u\g\a\z\p\2\y\7\d\s\9\h\y\3\f\w\q\3\p\2\h\z\j\a\w\x\z\i\8\2\y\s\3\k\v\1\c\a\r\1\q\2\i\i\e\8\9\o\d\2\b\z\7\i\1\z\c\n\r\l\0\e\u\h\f\b\g\z\v\1\k\3\n\w\1\5\d\u\o\4\c\u\5\m\y\s\9\y\3\c\3\2\x\v\p\c\q\1\n\w\v\d\3\2\6\f\z\m\n\q\o\6\d\n\f\n\c\7\v\a\m\n\n\8\4\i\3\p\8\0\d\n\3\f\l\i\j\x\s\q\c\q\w\e\l\8\m\5\w\j\5\a\z\1\t\2\3\d\w\o\z\4\3\t\x\1\2\7\d\7\b\i\d\z\l\2\1\y\r\2\e\4\i\k\n\k\r\7\8\l\p\0\a\y\9\t\y\u\q\c\w\4\0\0\u\b\n\s\o\7\3\1\0\y\h\i\u\9\d\h\q\5\l\3\9\k\y\c\a\3\m\n\o\n\7\g\9\3\l\4\8\i\l\1\g\s\j\s\z\q\1\v\1\f\k\w\k\z\j\q\t\5\p\o\k\k\6\6\f\8\o\g\i\u\y\z\q\o\0\p\n\h\8\u\r\v\1\k\n\n\4\r\3\3\x\1\e\0\g\8\d\5\r\g\7\o\x\h\w\j\5\3\x\o\3\0\o\r\f\9\9\f\e\y\h\e\v\1\k\b\9\a\c\o\t\y\0\x\f\t\l\j\4\p\9\z\d\e\l\d\e\g\3\q\7\e\o\b\h\d\t\e\i\s\s\h\5\x\5\5\w\u\f\9\0\d ]] 00:46:18.211 19:39:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:18.211 19:39:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:46:18.211 [2024-04-18 19:39:34.070849] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:18.211 [2024-04-18 19:39:34.071048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147059 ] 00:46:18.469 [2024-04-18 19:39:34.255331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:18.727 [2024-04-18 19:39:34.522737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:20.364  Copying: 512/512 [B] (average 500 kBps) 00:46:20.364 00:46:20.364 19:39:36 -- dd/posix.sh@93 -- # [[ uaq1z8yofnao6fiywwsu1m9e4plm9gmu7mjay82nb5g06vwwogx503ercubpyuagvyini7whjmlexoisfsycxhbybk85qtgu1yj0xmjezqqg8nav1hyr3kwg8yb28yanm4dhf51sugazp2y7ds9hy3fwq3p2hzjawxzi82ys3kv1car1q2iie89od2bz7i1zcnrl0euhfbgzv1k3nw15duo4cu5mys9y3c32xvpcq1nwvd326fzmnqo6dnfnc7vamnn84i3p80dn3flijxsqcqwel8m5wj5az1t23dwoz43tx127d7bidzl21yr2e4iknkr78lp0ay9tyuqcw400ubnso7310yhiu9dhq5l39kyca3mnon7g93l48il1gsjszq1v1fkwkzjqt5pokk66f8ogiuyzqo0pnh8urv1knn4r33x1e0g8d5rg7oxhwj53xo30orf99feyhev1kb9acoty0xftlj4p9zdeldeg3q7eobhdteissh5x55wuf90d == \u\a\q\1\z\8\y\o\f\n\a\o\6\f\i\y\w\w\s\u\1\m\9\e\4\p\l\m\9\g\m\u\7\m\j\a\y\8\2\n\b\5\g\0\6\v\w\w\o\g\x\5\0\3\e\r\c\u\b\p\y\u\a\g\v\y\i\n\i\7\w\h\j\m\l\e\x\o\i\s\f\s\y\c\x\h\b\y\b\k\8\5\q\t\g\u\1\y\j\0\x\m\j\e\z\q\q\g\8\n\a\v\1\h\y\r\3\k\w\g\8\y\b\2\8\y\a\n\m\4\d\h\f\5\1\s\u\g\a\z\p\2\y\7\d\s\9\h\y\3\f\w\q\3\p\2\h\z\j\a\w\x\z\i\8\2\y\s\3\k\v\1\c\a\r\1\q\2\i\i\e\8\9\o\d\2\b\z\7\i\1\z\c\n\r\l\0\e\u\h\f\b\g\z\v\1\k\3\n\w\1\5\d\u\o\4\c\u\5\m\y\s\9\y\3\c\3\2\x\v\p\c\q\1\n\w\v\d\3\2\6\f\z\m\n\q\o\6\d\n\f\n\c\7\v\a\m\n\n\8\4\i\3\p\8\0\d\n\3\f\l\i\j\x\s\q\c\q\w\e\l\8\m\5\w\j\5\a\z\1\t\2\3\d\w\o\z\4\3\t\x\1\2\7\d\7\b\i\d\z\l\2\1\y\r\2\e\4\i\k\n\k\r\7\8\l\p\0\a\y\9\t\y\u\q\c\w\4\0\0\u\b\n\s\o\7\3\1\0\y\h\i\u\9\d\h\q\5\l\3\9\k\y\c\a\3\m\n\o\n\7\g\9\3\l\4\8\i\l\1\g\s\j\s\z\q\1\v\1\f\k\w\k\z\j\q\t\5\p\o\k\k\6\6\f\8\o\g\i\u\y\z\q\o\0\p\n\h\8\u\r\v\1\k\n\n\4\r\3\3\x\1\e\0\g\8\d\5\r\g\7\o\x\h\w\j\5\3\x\o\3\0\o\r\f\9\9\f\e\y\h\e\v\1\k\b\9\a\c\o\t\y\0\x\f\t\l\j\4\p\9\z\d\e\l\d\e\g\3\q\7\e\o\b\h\d\t\e\i\s\s\h\5\x\5\5\w\u\f\9\0\d ]] 00:46:20.364 19:39:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:20.364 19:39:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:46:20.364 [2024-04-18 19:39:36.279070] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:20.364 [2024-04-18 19:39:36.279225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147084 ] 00:46:20.623 [2024-04-18 19:39:36.442078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:20.881 [2024-04-18 19:39:36.654940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:22.513  Copying: 512/512 [B] (average 100 kBps) 00:46:22.514 00:46:22.772 19:39:38 -- dd/posix.sh@93 -- # [[ uaq1z8yofnao6fiywwsu1m9e4plm9gmu7mjay82nb5g06vwwogx503ercubpyuagvyini7whjmlexoisfsycxhbybk85qtgu1yj0xmjezqqg8nav1hyr3kwg8yb28yanm4dhf51sugazp2y7ds9hy3fwq3p2hzjawxzi82ys3kv1car1q2iie89od2bz7i1zcnrl0euhfbgzv1k3nw15duo4cu5mys9y3c32xvpcq1nwvd326fzmnqo6dnfnc7vamnn84i3p80dn3flijxsqcqwel8m5wj5az1t23dwoz43tx127d7bidzl21yr2e4iknkr78lp0ay9tyuqcw400ubnso7310yhiu9dhq5l39kyca3mnon7g93l48il1gsjszq1v1fkwkzjqt5pokk66f8ogiuyzqo0pnh8urv1knn4r33x1e0g8d5rg7oxhwj53xo30orf99feyhev1kb9acoty0xftlj4p9zdeldeg3q7eobhdteissh5x55wuf90d == \u\a\q\1\z\8\y\o\f\n\a\o\6\f\i\y\w\w\s\u\1\m\9\e\4\p\l\m\9\g\m\u\7\m\j\a\y\8\2\n\b\5\g\0\6\v\w\w\o\g\x\5\0\3\e\r\c\u\b\p\y\u\a\g\v\y\i\n\i\7\w\h\j\m\l\e\x\o\i\s\f\s\y\c\x\h\b\y\b\k\8\5\q\t\g\u\1\y\j\0\x\m\j\e\z\q\q\g\8\n\a\v\1\h\y\r\3\k\w\g\8\y\b\2\8\y\a\n\m\4\d\h\f\5\1\s\u\g\a\z\p\2\y\7\d\s\9\h\y\3\f\w\q\3\p\2\h\z\j\a\w\x\z\i\8\2\y\s\3\k\v\1\c\a\r\1\q\2\i\i\e\8\9\o\d\2\b\z\7\i\1\z\c\n\r\l\0\e\u\h\f\b\g\z\v\1\k\3\n\w\1\5\d\u\o\4\c\u\5\m\y\s\9\y\3\c\3\2\x\v\p\c\q\1\n\w\v\d\3\2\6\f\z\m\n\q\o\6\d\n\f\n\c\7\v\a\m\n\n\8\4\i\3\p\8\0\d\n\3\f\l\i\j\x\s\q\c\q\w\e\l\8\m\5\w\j\5\a\z\1\t\2\3\d\w\o\z\4\3\t\x\1\2\7\d\7\b\i\d\z\l\2\1\y\r\2\e\4\i\k\n\k\r\7\8\l\p\0\a\y\9\t\y\u\q\c\w\4\0\0\u\b\n\s\o\7\3\1\0\y\h\i\u\9\d\h\q\5\l\3\9\k\y\c\a\3\m\n\o\n\7\g\9\3\l\4\8\i\l\1\g\s\j\s\z\q\1\v\1\f\k\w\k\z\j\q\t\5\p\o\k\k\6\6\f\8\o\g\i\u\y\z\q\o\0\p\n\h\8\u\r\v\1\k\n\n\4\r\3\3\x\1\e\0\g\8\d\5\r\g\7\o\x\h\w\j\5\3\x\o\3\0\o\r\f\9\9\f\e\y\h\e\v\1\k\b\9\a\c\o\t\y\0\x\f\t\l\j\4\p\9\z\d\e\l\d\e\g\3\q\7\e\o\b\h\d\t\e\i\s\s\h\5\x\5\5\w\u\f\9\0\d ]] 00:46:22.772 19:39:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:22.772 19:39:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:46:22.772 [2024-04-18 19:39:38.517220] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:22.772 [2024-04-18 19:39:38.517403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147113 ] 00:46:23.030 [2024-04-18 19:39:38.703737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.288 [2024-04-18 19:39:38.967758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.937  Copying: 512/512 [B] (average 250 kBps) 00:46:24.937 00:46:24.937 19:39:40 -- dd/posix.sh@93 -- # [[ uaq1z8yofnao6fiywwsu1m9e4plm9gmu7mjay82nb5g06vwwogx503ercubpyuagvyini7whjmlexoisfsycxhbybk85qtgu1yj0xmjezqqg8nav1hyr3kwg8yb28yanm4dhf51sugazp2y7ds9hy3fwq3p2hzjawxzi82ys3kv1car1q2iie89od2bz7i1zcnrl0euhfbgzv1k3nw15duo4cu5mys9y3c32xvpcq1nwvd326fzmnqo6dnfnc7vamnn84i3p80dn3flijxsqcqwel8m5wj5az1t23dwoz43tx127d7bidzl21yr2e4iknkr78lp0ay9tyuqcw400ubnso7310yhiu9dhq5l39kyca3mnon7g93l48il1gsjszq1v1fkwkzjqt5pokk66f8ogiuyzqo0pnh8urv1knn4r33x1e0g8d5rg7oxhwj53xo30orf99feyhev1kb9acoty0xftlj4p9zdeldeg3q7eobhdteissh5x55wuf90d == \u\a\q\1\z\8\y\o\f\n\a\o\6\f\i\y\w\w\s\u\1\m\9\e\4\p\l\m\9\g\m\u\7\m\j\a\y\8\2\n\b\5\g\0\6\v\w\w\o\g\x\5\0\3\e\r\c\u\b\p\y\u\a\g\v\y\i\n\i\7\w\h\j\m\l\e\x\o\i\s\f\s\y\c\x\h\b\y\b\k\8\5\q\t\g\u\1\y\j\0\x\m\j\e\z\q\q\g\8\n\a\v\1\h\y\r\3\k\w\g\8\y\b\2\8\y\a\n\m\4\d\h\f\5\1\s\u\g\a\z\p\2\y\7\d\s\9\h\y\3\f\w\q\3\p\2\h\z\j\a\w\x\z\i\8\2\y\s\3\k\v\1\c\a\r\1\q\2\i\i\e\8\9\o\d\2\b\z\7\i\1\z\c\n\r\l\0\e\u\h\f\b\g\z\v\1\k\3\n\w\1\5\d\u\o\4\c\u\5\m\y\s\9\y\3\c\3\2\x\v\p\c\q\1\n\w\v\d\3\2\6\f\z\m\n\q\o\6\d\n\f\n\c\7\v\a\m\n\n\8\4\i\3\p\8\0\d\n\3\f\l\i\j\x\s\q\c\q\w\e\l\8\m\5\w\j\5\a\z\1\t\2\3\d\w\o\z\4\3\t\x\1\2\7\d\7\b\i\d\z\l\2\1\y\r\2\e\4\i\k\n\k\r\7\8\l\p\0\a\y\9\t\y\u\q\c\w\4\0\0\u\b\n\s\o\7\3\1\0\y\h\i\u\9\d\h\q\5\l\3\9\k\y\c\a\3\m\n\o\n\7\g\9\3\l\4\8\i\l\1\g\s\j\s\z\q\1\v\1\f\k\w\k\z\j\q\t\5\p\o\k\k\6\6\f\8\o\g\i\u\y\z\q\o\0\p\n\h\8\u\r\v\1\k\n\n\4\r\3\3\x\1\e\0\g\8\d\5\r\g\7\o\x\h\w\j\5\3\x\o\3\0\o\r\f\9\9\f\e\y\h\e\v\1\k\b\9\a\c\o\t\y\0\x\f\t\l\j\4\p\9\z\d\e\l\d\e\g\3\q\7\e\o\b\h\d\t\e\i\s\s\h\5\x\5\5\w\u\f\9\0\d ]] 00:46:24.937 19:39:40 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:46:24.937 19:39:40 -- dd/posix.sh@86 -- # gen_bytes 512 00:46:24.937 19:39:40 -- dd/common.sh@98 -- # xtrace_disable 00:46:24.937 19:39:40 -- common/autotest_common.sh@10 -- # set +x 00:46:24.937 19:39:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:24.937 19:39:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:46:24.937 [2024-04-18 19:39:40.717487] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:24.937 [2024-04-18 19:39:40.717643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147137 ] 00:46:25.196 [2024-04-18 19:39:40.879366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:25.196 [2024-04-18 19:39:41.091598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:27.137  Copying: 512/512 [B] (average 500 kBps) 00:46:27.137 00:46:27.137 19:39:42 -- dd/posix.sh@93 -- # [[ d6jz1fb4e1o6glhdprkvdpxsfcw2frdz2mllxienewb1kxylx9rv621eayzk0nwkfhckqmxs0b3311ckvyg3r8he7henlpxnpvu3lyrk8hsws57pt6gt8hnnzzu1etqky0dpeuvv8gou7v1usdi00ntde6hk2djzp7qnqiskouspoi7gsdal6tac5m4uhaftqeg1d4eq2rbswg2nryimtyat76dni1q52gv15dyo6ca8cn6gbgp1nmyo7fdzhsk9yilx6wk3jwcky7f1ikxfoocjco1cuwbsn218i7pceiqjrignafoiuk7sa8toxqczudxko3vefttvtpkc1vqidbprbh3t3r0scx4ll0dom24qwqgvgxw39aunof6ch7vhygp2uqipdttl82a9e72plhkwu98qighjyce154wrx3iaus6b0o22rlw8dfrjrcbn8t9e1nxqwm4r2g7adkcs4elm7f8d5zlpm3w4ipvdgeifnxrw6tvv5gdyft6m9ipd == \d\6\j\z\1\f\b\4\e\1\o\6\g\l\h\d\p\r\k\v\d\p\x\s\f\c\w\2\f\r\d\z\2\m\l\l\x\i\e\n\e\w\b\1\k\x\y\l\x\9\r\v\6\2\1\e\a\y\z\k\0\n\w\k\f\h\c\k\q\m\x\s\0\b\3\3\1\1\c\k\v\y\g\3\r\8\h\e\7\h\e\n\l\p\x\n\p\v\u\3\l\y\r\k\8\h\s\w\s\5\7\p\t\6\g\t\8\h\n\n\z\z\u\1\e\t\q\k\y\0\d\p\e\u\v\v\8\g\o\u\7\v\1\u\s\d\i\0\0\n\t\d\e\6\h\k\2\d\j\z\p\7\q\n\q\i\s\k\o\u\s\p\o\i\7\g\s\d\a\l\6\t\a\c\5\m\4\u\h\a\f\t\q\e\g\1\d\4\e\q\2\r\b\s\w\g\2\n\r\y\i\m\t\y\a\t\7\6\d\n\i\1\q\5\2\g\v\1\5\d\y\o\6\c\a\8\c\n\6\g\b\g\p\1\n\m\y\o\7\f\d\z\h\s\k\9\y\i\l\x\6\w\k\3\j\w\c\k\y\7\f\1\i\k\x\f\o\o\c\j\c\o\1\c\u\w\b\s\n\2\1\8\i\7\p\c\e\i\q\j\r\i\g\n\a\f\o\i\u\k\7\s\a\8\t\o\x\q\c\z\u\d\x\k\o\3\v\e\f\t\t\v\t\p\k\c\1\v\q\i\d\b\p\r\b\h\3\t\3\r\0\s\c\x\4\l\l\0\d\o\m\2\4\q\w\q\g\v\g\x\w\3\9\a\u\n\o\f\6\c\h\7\v\h\y\g\p\2\u\q\i\p\d\t\t\l\8\2\a\9\e\7\2\p\l\h\k\w\u\9\8\q\i\g\h\j\y\c\e\1\5\4\w\r\x\3\i\a\u\s\6\b\0\o\2\2\r\l\w\8\d\f\r\j\r\c\b\n\8\t\9\e\1\n\x\q\w\m\4\r\2\g\7\a\d\k\c\s\4\e\l\m\7\f\8\d\5\z\l\p\m\3\w\4\i\p\v\d\g\e\i\f\n\x\r\w\6\t\v\v\5\g\d\y\f\t\6\m\9\i\p\d ]] 00:46:27.137 19:39:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:27.137 19:39:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:46:27.137 [2024-04-18 19:39:42.841535] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:27.137 [2024-04-18 19:39:42.841750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147184 ] 00:46:27.137 [2024-04-18 19:39:43.028605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:27.395 [2024-04-18 19:39:43.234661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:29.028  Copying: 512/512 [B] (average 500 kBps) 00:46:29.028 00:46:29.028 19:39:44 -- dd/posix.sh@93 -- # [[ d6jz1fb4e1o6glhdprkvdpxsfcw2frdz2mllxienewb1kxylx9rv621eayzk0nwkfhckqmxs0b3311ckvyg3r8he7henlpxnpvu3lyrk8hsws57pt6gt8hnnzzu1etqky0dpeuvv8gou7v1usdi00ntde6hk2djzp7qnqiskouspoi7gsdal6tac5m4uhaftqeg1d4eq2rbswg2nryimtyat76dni1q52gv15dyo6ca8cn6gbgp1nmyo7fdzhsk9yilx6wk3jwcky7f1ikxfoocjco1cuwbsn218i7pceiqjrignafoiuk7sa8toxqczudxko3vefttvtpkc1vqidbprbh3t3r0scx4ll0dom24qwqgvgxw39aunof6ch7vhygp2uqipdttl82a9e72plhkwu98qighjyce154wrx3iaus6b0o22rlw8dfrjrcbn8t9e1nxqwm4r2g7adkcs4elm7f8d5zlpm3w4ipvdgeifnxrw6tvv5gdyft6m9ipd == \d\6\j\z\1\f\b\4\e\1\o\6\g\l\h\d\p\r\k\v\d\p\x\s\f\c\w\2\f\r\d\z\2\m\l\l\x\i\e\n\e\w\b\1\k\x\y\l\x\9\r\v\6\2\1\e\a\y\z\k\0\n\w\k\f\h\c\k\q\m\x\s\0\b\3\3\1\1\c\k\v\y\g\3\r\8\h\e\7\h\e\n\l\p\x\n\p\v\u\3\l\y\r\k\8\h\s\w\s\5\7\p\t\6\g\t\8\h\n\n\z\z\u\1\e\t\q\k\y\0\d\p\e\u\v\v\8\g\o\u\7\v\1\u\s\d\i\0\0\n\t\d\e\6\h\k\2\d\j\z\p\7\q\n\q\i\s\k\o\u\s\p\o\i\7\g\s\d\a\l\6\t\a\c\5\m\4\u\h\a\f\t\q\e\g\1\d\4\e\q\2\r\b\s\w\g\2\n\r\y\i\m\t\y\a\t\7\6\d\n\i\1\q\5\2\g\v\1\5\d\y\o\6\c\a\8\c\n\6\g\b\g\p\1\n\m\y\o\7\f\d\z\h\s\k\9\y\i\l\x\6\w\k\3\j\w\c\k\y\7\f\1\i\k\x\f\o\o\c\j\c\o\1\c\u\w\b\s\n\2\1\8\i\7\p\c\e\i\q\j\r\i\g\n\a\f\o\i\u\k\7\s\a\8\t\o\x\q\c\z\u\d\x\k\o\3\v\e\f\t\t\v\t\p\k\c\1\v\q\i\d\b\p\r\b\h\3\t\3\r\0\s\c\x\4\l\l\0\d\o\m\2\4\q\w\q\g\v\g\x\w\3\9\a\u\n\o\f\6\c\h\7\v\h\y\g\p\2\u\q\i\p\d\t\t\l\8\2\a\9\e\7\2\p\l\h\k\w\u\9\8\q\i\g\h\j\y\c\e\1\5\4\w\r\x\3\i\a\u\s\6\b\0\o\2\2\r\l\w\8\d\f\r\j\r\c\b\n\8\t\9\e\1\n\x\q\w\m\4\r\2\g\7\a\d\k\c\s\4\e\l\m\7\f\8\d\5\z\l\p\m\3\w\4\i\p\v\d\g\e\i\f\n\x\r\w\6\t\v\v\5\g\d\y\f\t\6\m\9\i\p\d ]] 00:46:29.028 19:39:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:29.028 19:39:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:46:29.313 [2024-04-18 19:39:44.974359] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:29.313 [2024-04-18 19:39:44.974516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147212 ] 00:46:29.313 [2024-04-18 19:39:45.139109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:29.571 [2024-04-18 19:39:45.356947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:31.205  Copying: 512/512 [B] (average 250 kBps) 00:46:31.205 00:46:31.205 19:39:47 -- dd/posix.sh@93 -- # [[ d6jz1fb4e1o6glhdprkvdpxsfcw2frdz2mllxienewb1kxylx9rv621eayzk0nwkfhckqmxs0b3311ckvyg3r8he7henlpxnpvu3lyrk8hsws57pt6gt8hnnzzu1etqky0dpeuvv8gou7v1usdi00ntde6hk2djzp7qnqiskouspoi7gsdal6tac5m4uhaftqeg1d4eq2rbswg2nryimtyat76dni1q52gv15dyo6ca8cn6gbgp1nmyo7fdzhsk9yilx6wk3jwcky7f1ikxfoocjco1cuwbsn218i7pceiqjrignafoiuk7sa8toxqczudxko3vefttvtpkc1vqidbprbh3t3r0scx4ll0dom24qwqgvgxw39aunof6ch7vhygp2uqipdttl82a9e72plhkwu98qighjyce154wrx3iaus6b0o22rlw8dfrjrcbn8t9e1nxqwm4r2g7adkcs4elm7f8d5zlpm3w4ipvdgeifnxrw6tvv5gdyft6m9ipd == \d\6\j\z\1\f\b\4\e\1\o\6\g\l\h\d\p\r\k\v\d\p\x\s\f\c\w\2\f\r\d\z\2\m\l\l\x\i\e\n\e\w\b\1\k\x\y\l\x\9\r\v\6\2\1\e\a\y\z\k\0\n\w\k\f\h\c\k\q\m\x\s\0\b\3\3\1\1\c\k\v\y\g\3\r\8\h\e\7\h\e\n\l\p\x\n\p\v\u\3\l\y\r\k\8\h\s\w\s\5\7\p\t\6\g\t\8\h\n\n\z\z\u\1\e\t\q\k\y\0\d\p\e\u\v\v\8\g\o\u\7\v\1\u\s\d\i\0\0\n\t\d\e\6\h\k\2\d\j\z\p\7\q\n\q\i\s\k\o\u\s\p\o\i\7\g\s\d\a\l\6\t\a\c\5\m\4\u\h\a\f\t\q\e\g\1\d\4\e\q\2\r\b\s\w\g\2\n\r\y\i\m\t\y\a\t\7\6\d\n\i\1\q\5\2\g\v\1\5\d\y\o\6\c\a\8\c\n\6\g\b\g\p\1\n\m\y\o\7\f\d\z\h\s\k\9\y\i\l\x\6\w\k\3\j\w\c\k\y\7\f\1\i\k\x\f\o\o\c\j\c\o\1\c\u\w\b\s\n\2\1\8\i\7\p\c\e\i\q\j\r\i\g\n\a\f\o\i\u\k\7\s\a\8\t\o\x\q\c\z\u\d\x\k\o\3\v\e\f\t\t\v\t\p\k\c\1\v\q\i\d\b\p\r\b\h\3\t\3\r\0\s\c\x\4\l\l\0\d\o\m\2\4\q\w\q\g\v\g\x\w\3\9\a\u\n\o\f\6\c\h\7\v\h\y\g\p\2\u\q\i\p\d\t\t\l\8\2\a\9\e\7\2\p\l\h\k\w\u\9\8\q\i\g\h\j\y\c\e\1\5\4\w\r\x\3\i\a\u\s\6\b\0\o\2\2\r\l\w\8\d\f\r\j\r\c\b\n\8\t\9\e\1\n\x\q\w\m\4\r\2\g\7\a\d\k\c\s\4\e\l\m\7\f\8\d\5\z\l\p\m\3\w\4\i\p\v\d\g\e\i\f\n\x\r\w\6\t\v\v\5\g\d\y\f\t\6\m\9\i\p\d ]] 00:46:31.205 19:39:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:46:31.205 19:39:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:46:31.462 [2024-04-18 19:39:47.153355] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:31.462 [2024-04-18 19:39:47.154253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147237 ] 00:46:31.462 [2024-04-18 19:39:47.336086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:31.723 [2024-04-18 19:39:47.547263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:33.359  Copying: 512/512 [B] (average 250 kBps) 00:46:33.359 00:46:33.359 ************************************ 00:46:33.359 END TEST dd_flags_misc_forced_aio 00:46:33.359 ************************************ 00:46:33.359 19:39:49 -- dd/posix.sh@93 -- # [[ d6jz1fb4e1o6glhdprkvdpxsfcw2frdz2mllxienewb1kxylx9rv621eayzk0nwkfhckqmxs0b3311ckvyg3r8he7henlpxnpvu3lyrk8hsws57pt6gt8hnnzzu1etqky0dpeuvv8gou7v1usdi00ntde6hk2djzp7qnqiskouspoi7gsdal6tac5m4uhaftqeg1d4eq2rbswg2nryimtyat76dni1q52gv15dyo6ca8cn6gbgp1nmyo7fdzhsk9yilx6wk3jwcky7f1ikxfoocjco1cuwbsn218i7pceiqjrignafoiuk7sa8toxqczudxko3vefttvtpkc1vqidbprbh3t3r0scx4ll0dom24qwqgvgxw39aunof6ch7vhygp2uqipdttl82a9e72plhkwu98qighjyce154wrx3iaus6b0o22rlw8dfrjrcbn8t9e1nxqwm4r2g7adkcs4elm7f8d5zlpm3w4ipvdgeifnxrw6tvv5gdyft6m9ipd == \d\6\j\z\1\f\b\4\e\1\o\6\g\l\h\d\p\r\k\v\d\p\x\s\f\c\w\2\f\r\d\z\2\m\l\l\x\i\e\n\e\w\b\1\k\x\y\l\x\9\r\v\6\2\1\e\a\y\z\k\0\n\w\k\f\h\c\k\q\m\x\s\0\b\3\3\1\1\c\k\v\y\g\3\r\8\h\e\7\h\e\n\l\p\x\n\p\v\u\3\l\y\r\k\8\h\s\w\s\5\7\p\t\6\g\t\8\h\n\n\z\z\u\1\e\t\q\k\y\0\d\p\e\u\v\v\8\g\o\u\7\v\1\u\s\d\i\0\0\n\t\d\e\6\h\k\2\d\j\z\p\7\q\n\q\i\s\k\o\u\s\p\o\i\7\g\s\d\a\l\6\t\a\c\5\m\4\u\h\a\f\t\q\e\g\1\d\4\e\q\2\r\b\s\w\g\2\n\r\y\i\m\t\y\a\t\7\6\d\n\i\1\q\5\2\g\v\1\5\d\y\o\6\c\a\8\c\n\6\g\b\g\p\1\n\m\y\o\7\f\d\z\h\s\k\9\y\i\l\x\6\w\k\3\j\w\c\k\y\7\f\1\i\k\x\f\o\o\c\j\c\o\1\c\u\w\b\s\n\2\1\8\i\7\p\c\e\i\q\j\r\i\g\n\a\f\o\i\u\k\7\s\a\8\t\o\x\q\c\z\u\d\x\k\o\3\v\e\f\t\t\v\t\p\k\c\1\v\q\i\d\b\p\r\b\h\3\t\3\r\0\s\c\x\4\l\l\0\d\o\m\2\4\q\w\q\g\v\g\x\w\3\9\a\u\n\o\f\6\c\h\7\v\h\y\g\p\2\u\q\i\p\d\t\t\l\8\2\a\9\e\7\2\p\l\h\k\w\u\9\8\q\i\g\h\j\y\c\e\1\5\4\w\r\x\3\i\a\u\s\6\b\0\o\2\2\r\l\w\8\d\f\r\j\r\c\b\n\8\t\9\e\1\n\x\q\w\m\4\r\2\g\7\a\d\k\c\s\4\e\l\m\7\f\8\d\5\z\l\p\m\3\w\4\i\p\v\d\g\e\i\f\n\x\r\w\6\t\v\v\5\g\d\y\f\t\6\m\9\i\p\d ]] 00:46:33.359 00:46:33.359 real 0m17.533s 00:46:33.359 user 0m14.627s 00:46:33.359 sys 0m1.840s 00:46:33.359 19:39:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:33.359 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:46:33.618 19:39:49 -- dd/posix.sh@1 -- # cleanup 00:46:33.618 19:39:49 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:46:33.618 19:39:49 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:46:33.618 00:46:33.618 real 1m14.337s 00:46:33.618 user 1m0.367s 00:46:33.618 sys 0m7.965s 00:46:33.618 ************************************ 00:46:33.618 END TEST spdk_dd_posix 00:46:33.618 ************************************ 00:46:33.618 19:39:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:33.618 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:46:33.618 19:39:49 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:46:33.618 19:39:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:46:33.618 19:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:33.618 19:39:49 -- 
common/autotest_common.sh@10 -- # set +x 00:46:33.618 ************************************ 00:46:33.618 START TEST spdk_dd_malloc 00:46:33.618 ************************************ 00:46:33.618 19:39:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:46:33.619 * Looking for test storage... 00:46:33.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:46:33.619 19:39:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:33.619 19:39:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:33.619 19:39:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:33.619 19:39:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:33.619 19:39:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:33.619 19:39:49 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:33.619 19:39:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:33.619 19:39:49 -- paths/export.sh@5 -- # export PATH 00:46:33.619 19:39:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:33.619 19:39:49 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:46:33.619 19:39:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:46:33.619 19:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:33.619 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:46:33.619 ************************************ 00:46:33.619 START TEST dd_malloc_copy 00:46:33.619 ************************************ 00:46:33.619 19:39:49 -- 
common/autotest_common.sh@1111 -- # malloc_copy 00:46:33.619 19:39:49 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:46:33.619 19:39:49 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:46:33.619 19:39:49 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:46:33.619 19:39:49 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:46:33.619 19:39:49 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:46:33.619 19:39:49 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:46:33.619 19:39:49 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:46:33.619 19:39:49 -- dd/malloc.sh@28 -- # gen_conf 00:46:33.619 19:39:49 -- dd/common.sh@31 -- # xtrace_disable 00:46:33.619 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:46:33.878 { 00:46:33.878 "subsystems": [ 00:46:33.878 { 00:46:33.878 "subsystem": "bdev", 00:46:33.878 "config": [ 00:46:33.878 { 00:46:33.878 "params": { 00:46:33.878 "num_blocks": 1048576, 00:46:33.878 "block_size": 512, 00:46:33.878 "name": "malloc0" 00:46:33.878 }, 00:46:33.878 "method": "bdev_malloc_create" 00:46:33.878 }, 00:46:33.878 { 00:46:33.878 "params": { 00:46:33.878 "num_blocks": 1048576, 00:46:33.878 "block_size": 512, 00:46:33.878 "name": "malloc1" 00:46:33.878 }, 00:46:33.878 "method": "bdev_malloc_create" 00:46:33.878 }, 00:46:33.878 { 00:46:33.878 "method": "bdev_wait_for_examine" 00:46:33.878 } 00:46:33.878 ] 00:46:33.878 } 00:46:33.878 ] 00:46:33.878 } 00:46:33.878 [2024-04-18 19:39:49.587721] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
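Illustrative sketch (not from the captured log): the dd_malloc_copy run traced above reduces to one spdk_dd call that receives the JSON bdev config on a file descriptor. A minimal standalone reproduction, assuming only the binary path and the config shown in this trace, and substituting bash process substitution for the test's fd-62 plumbing, would be:

    # Sketch of the dd_malloc_copy invocation traced above (illustrative only).
    # Assumes the spdk_dd path and the JSON config that appear in this log;
    # the test's /dev/fd/62 redirection is replaced by process substitution.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "num_blocks": 1048576, "block_size": 512, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "num_blocks": 1048576, "block_size": 512, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    )

The two 1048576-block, 512-byte malloc bdevs correspond to the 512 MB totals reported in the copy progress lines that follow.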
00:46:33.878 [2024-04-18 19:39:49.587988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147349 ] 00:46:33.878 [2024-04-18 19:39:49.752234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:34.137 [2024-04-18 19:39:49.953114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:42.448  Copying: 216/512 [MB] (216 MBps) Copying: 428/512 [MB] (212 MBps) Copying: 512/512 [MB] (average 215 MBps) 00:46:42.448 00:46:42.448 19:39:57 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:46:42.448 19:39:57 -- dd/malloc.sh@33 -- # gen_conf 00:46:42.448 19:39:57 -- dd/common.sh@31 -- # xtrace_disable 00:46:42.448 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:46:42.448 { 00:46:42.448 "subsystems": [ 00:46:42.448 { 00:46:42.448 "subsystem": "bdev", 00:46:42.448 "config": [ 00:46:42.448 { 00:46:42.448 "params": { 00:46:42.448 "num_blocks": 1048576, 00:46:42.448 "block_size": 512, 00:46:42.448 "name": "malloc0" 00:46:42.448 }, 00:46:42.448 "method": "bdev_malloc_create" 00:46:42.448 }, 00:46:42.448 { 00:46:42.448 "params": { 00:46:42.448 "num_blocks": 1048576, 00:46:42.448 "block_size": 512, 00:46:42.448 "name": "malloc1" 00:46:42.448 }, 00:46:42.448 "method": "bdev_malloc_create" 00:46:42.448 }, 00:46:42.448 { 00:46:42.448 "method": "bdev_wait_for_examine" 00:46:42.448 } 00:46:42.448 ] 00:46:42.448 } 00:46:42.448 ] 00:46:42.448 } 00:46:42.448 [2024-04-18 19:39:58.031298] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:46:42.448 [2024-04-18 19:39:58.031456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147466 ] 00:46:42.448 [2024-04-18 19:39:58.192694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:42.708 [2024-04-18 19:39:58.391509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:51.707  Copying: 204/512 [MB] (204 MBps) Copying: 412/512 [MB] (208 MBps) Copying: 512/512 [MB] (average 206 MBps) 00:46:51.707 00:46:51.707 00:46:51.707 real 0m17.056s 00:46:51.707 user 0m15.931s 00:46:51.707 sys 0m0.971s 00:46:51.707 ************************************ 00:46:51.707 19:40:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:51.707 19:40:06 -- common/autotest_common.sh@10 -- # set +x 00:46:51.707 END TEST dd_malloc_copy 00:46:51.707 ************************************ 00:46:51.707 ************************************ 00:46:51.707 END TEST spdk_dd_malloc 00:46:51.707 ************************************ 00:46:51.707 00:46:51.707 real 0m17.235s 00:46:51.707 user 0m16.032s 00:46:51.707 sys 0m1.059s 00:46:51.707 19:40:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:51.707 19:40:06 -- common/autotest_common.sh@10 -- # set +x 00:46:51.707 19:40:06 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:46:51.707 19:40:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:46:51.707 19:40:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:51.707 19:40:06 -- common/autotest_common.sh@10 -- # set +x 00:46:51.707 ************************************ 00:46:51.707 
START TEST spdk_dd_bdev_to_bdev 00:46:51.707 ************************************ 00:46:51.707 19:40:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:46:51.707 * Looking for test storage... 00:46:51.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:46:51.707 19:40:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:51.708 19:40:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:51.708 19:40:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:51.708 19:40:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:51.708 19:40:06 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:51.708 19:40:06 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:51.708 19:40:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:51.708 19:40:06 -- paths/export.sh@5 -- # export PATH 00:46:51.708 19:40:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:46:51.708 19:40:06 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:46:51.708 [2024-04-18 19:40:06.892039] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:46:51.708 [2024-04-18 19:40:06.892378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147662 ] 00:46:51.708 [2024-04-18 19:40:07.052937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.708 [2024-04-18 19:40:07.267562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:53.358  Copying: 256/256 [MB] (average 1158 MBps) 00:46:53.358 00:46:53.358 19:40:09 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:53.358 19:40:09 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:53.358 19:40:09 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:46:53.358 19:40:09 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:46:53.358 19:40:09 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:46:53.358 19:40:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:46:53.358 19:40:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:53.358 19:40:09 -- common/autotest_common.sh@10 -- # set +x 00:46:53.358 ************************************ 00:46:53.358 START TEST dd_inflate_file 00:46:53.358 ************************************ 00:46:53.358 19:40:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:46:53.358 [2024-04-18 19:40:09.271727] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:53.358 [2024-04-18 19:40:09.271915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147701 ] 00:46:53.617 [2024-04-18 19:40:09.434476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:53.874 [2024-04-18 19:40:09.648464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:55.521  Copying: 64/64 [MB] (average 1142 MBps) 00:46:55.521 00:46:55.521 00:46:55.521 real 0m2.204s 00:46:55.521 user 0m1.815s 00:46:55.521 sys 0m0.260s 00:46:55.521 19:40:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:55.521 19:40:11 -- common/autotest_common.sh@10 -- # set +x 00:46:55.521 ************************************ 00:46:55.521 END TEST dd_inflate_file 00:46:55.521 ************************************ 00:46:55.788 19:40:11 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:46:55.788 19:40:11 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:46:55.788 19:40:11 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:46:55.788 19:40:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:46:55.788 19:40:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:55.788 19:40:11 -- common/autotest_common.sh@10 -- # set +x 00:46:55.788 19:40:11 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:46:55.788 19:40:11 -- dd/common.sh@31 -- # xtrace_disable 00:46:55.788 19:40:11 -- common/autotest_common.sh@10 -- # set +x 00:46:55.788 ************************************ 00:46:55.788 START TEST dd_copy_to_out_bdev 00:46:55.788 ************************************ 00:46:55.789 19:40:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:46:55.789 { 00:46:55.789 "subsystems": [ 00:46:55.789 { 00:46:55.789 "subsystem": "bdev", 00:46:55.789 "config": [ 00:46:55.789 { 00:46:55.789 "params": { 00:46:55.789 "block_size": 4096, 00:46:55.789 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:46:55.789 "name": "aio1" 00:46:55.789 }, 00:46:55.789 "method": "bdev_aio_create" 00:46:55.789 }, 00:46:55.789 { 00:46:55.789 "params": { 00:46:55.789 "trtype": "pcie", 00:46:55.789 "traddr": "0000:00:10.0", 00:46:55.789 "name": "Nvme0" 00:46:55.789 }, 00:46:55.789 "method": "bdev_nvme_attach_controller" 00:46:55.789 }, 00:46:55.789 { 00:46:55.789 "method": "bdev_wait_for_examine" 00:46:55.789 } 00:46:55.789 ] 00:46:55.789 } 00:46:55.789 ] 00:46:55.789 } 00:46:55.789 [2024-04-18 19:40:11.593530] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:55.789 [2024-04-18 19:40:11.593744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147764 ] 00:46:56.060 [2024-04-18 19:40:11.770433] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:56.332 [2024-04-18 19:40:11.983470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:59.088  Copying: 64/64 [MB] (average 71 MBps) 00:46:59.088 00:46:59.088 00:46:59.088 real 0m3.238s 00:46:59.088 user 0m2.809s 00:46:59.088 sys 0m0.326s 00:46:59.088 19:40:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:46:59.088 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:46:59.088 ************************************ 00:46:59.088 END TEST dd_copy_to_out_bdev 00:46:59.088 ************************************ 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:46:59.088 19:40:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:46:59.088 19:40:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:46:59.088 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:46:59.088 ************************************ 00:46:59.088 START TEST dd_offset_magic 00:46:59.088 ************************************ 00:46:59.088 19:40:14 -- common/autotest_common.sh@1111 -- # offset_magic 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:46:59.088 19:40:14 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:46:59.088 19:40:14 -- dd/common.sh@31 -- # xtrace_disable 00:46:59.088 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:46:59.088 { 00:46:59.088 "subsystems": [ 00:46:59.088 { 00:46:59.088 "subsystem": "bdev", 00:46:59.088 "config": [ 00:46:59.088 { 00:46:59.088 "params": { 00:46:59.088 "block_size": 4096, 00:46:59.088 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:46:59.088 "name": "aio1" 00:46:59.088 }, 00:46:59.088 "method": "bdev_aio_create" 00:46:59.088 }, 00:46:59.088 { 00:46:59.088 "params": { 00:46:59.088 "trtype": "pcie", 00:46:59.088 "traddr": "0000:00:10.0", 00:46:59.088 "name": "Nvme0" 00:46:59.088 }, 00:46:59.088 "method": "bdev_nvme_attach_controller" 00:46:59.088 }, 00:46:59.088 { 00:46:59.088 "method": "bdev_wait_for_examine" 00:46:59.088 } 00:46:59.088 ] 00:46:59.088 } 00:46:59.088 ] 00:46:59.088 } 00:46:59.088 [2024-04-18 19:40:14.930115] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:46:59.089 [2024-04-18 19:40:14.930306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147850 ] 00:46:59.347 [2024-04-18 19:40:15.113776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.605 [2024-04-18 19:40:15.388546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:02.075  Copying: 65/65 [MB] (average 257 MBps) 00:47:02.075 00:47:02.075 19:40:17 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:47:02.075 19:40:17 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:47:02.075 19:40:17 -- dd/common.sh@31 -- # xtrace_disable 00:47:02.075 19:40:17 -- common/autotest_common.sh@10 -- # set +x 00:47:02.075 { 00:47:02.075 "subsystems": [ 00:47:02.075 { 00:47:02.075 "subsystem": "bdev", 00:47:02.075 "config": [ 00:47:02.075 { 00:47:02.075 "params": { 00:47:02.075 "block_size": 4096, 00:47:02.075 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:47:02.075 "name": "aio1" 00:47:02.075 }, 00:47:02.075 "method": "bdev_aio_create" 00:47:02.075 }, 00:47:02.075 { 00:47:02.075 "params": { 00:47:02.075 "trtype": "pcie", 00:47:02.075 "traddr": "0000:00:10.0", 00:47:02.075 "name": "Nvme0" 00:47:02.075 }, 00:47:02.075 "method": "bdev_nvme_attach_controller" 00:47:02.075 }, 00:47:02.075 { 00:47:02.075 "method": "bdev_wait_for_examine" 00:47:02.075 } 00:47:02.075 ] 00:47:02.075 } 00:47:02.075 ] 00:47:02.075 } 00:47:02.075 [2024-04-18 19:40:17.563013] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:02.075 [2024-04-18 19:40:17.563231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147888 ] 00:47:02.075 [2024-04-18 19:40:17.744098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:02.075 [2024-04-18 19:40:17.955346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:04.019  Copying: 1024/1024 [kB] (average 500 MBps) 00:47:04.019 00:47:04.019 19:40:19 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:47:04.019 19:40:19 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:47:04.019 19:40:19 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:47:04.019 19:40:19 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:47:04.019 19:40:19 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:47:04.019 19:40:19 -- dd/common.sh@31 -- # xtrace_disable 00:47:04.019 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:47:04.019 { 00:47:04.019 "subsystems": [ 00:47:04.019 { 00:47:04.019 "subsystem": "bdev", 00:47:04.019 "config": [ 00:47:04.019 { 00:47:04.019 "params": { 00:47:04.019 "block_size": 4096, 00:47:04.019 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:47:04.019 "name": "aio1" 00:47:04.019 }, 00:47:04.019 "method": "bdev_aio_create" 00:47:04.019 }, 00:47:04.019 { 00:47:04.019 "params": { 00:47:04.019 "trtype": "pcie", 00:47:04.019 "traddr": "0000:00:10.0", 00:47:04.019 "name": "Nvme0" 00:47:04.019 }, 00:47:04.019 "method": "bdev_nvme_attach_controller" 00:47:04.019 }, 00:47:04.019 { 00:47:04.019 "method": "bdev_wait_for_examine" 00:47:04.019 } 00:47:04.019 ] 00:47:04.019 } 00:47:04.019 ] 00:47:04.019 } 00:47:04.019 [2024-04-18 19:40:19.907907] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:04.019 [2024-04-18 19:40:19.908773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147926 ] 00:47:04.277 [2024-04-18 19:40:20.093604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:04.535 [2024-04-18 19:40:20.316908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:06.542  Copying: 65/65 [MB] (average 321 MBps) 00:47:06.542 00:47:06.542 19:40:22 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:47:06.542 19:40:22 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:47:06.542 19:40:22 -- dd/common.sh@31 -- # xtrace_disable 00:47:06.542 19:40:22 -- common/autotest_common.sh@10 -- # set +x 00:47:06.542 { 00:47:06.542 "subsystems": [ 00:47:06.542 { 00:47:06.542 "subsystem": "bdev", 00:47:06.542 "config": [ 00:47:06.542 { 00:47:06.542 "params": { 00:47:06.542 "block_size": 4096, 00:47:06.542 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:47:06.542 "name": "aio1" 00:47:06.542 }, 00:47:06.542 "method": "bdev_aio_create" 00:47:06.542 }, 00:47:06.542 { 00:47:06.542 "params": { 00:47:06.542 "trtype": "pcie", 00:47:06.542 "traddr": "0000:00:10.0", 00:47:06.542 "name": "Nvme0" 00:47:06.542 }, 00:47:06.542 "method": "bdev_nvme_attach_controller" 00:47:06.542 }, 00:47:06.542 { 00:47:06.542 "method": "bdev_wait_for_examine" 00:47:06.542 } 00:47:06.542 ] 00:47:06.542 } 00:47:06.542 ] 00:47:06.542 } 00:47:06.542 [2024-04-18 19:40:22.353940] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:06.542 [2024-04-18 19:40:22.354681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147962 ] 00:47:06.799 [2024-04-18 19:40:22.535826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:07.057 [2024-04-18 19:40:22.742480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:09.221  Copying: 1024/1024 [kB] (average 1000 MBps) 00:47:09.221 00:47:09.221 19:40:24 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:47:09.221 19:40:24 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:47:09.221 00:47:09.221 real 0m9.838s 00:47:09.221 user 0m7.949s 00:47:09.221 sys 0m1.090s 00:47:09.221 19:40:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:09.221 19:40:24 -- common/autotest_common.sh@10 -- # set +x 00:47:09.221 ************************************ 00:47:09.221 END TEST dd_offset_magic 00:47:09.221 ************************************ 00:47:09.221 19:40:24 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:47:09.221 19:40:24 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:47:09.221 19:40:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:09.221 19:40:24 -- dd/common.sh@11 -- # local nvme_ref= 00:47:09.221 19:40:24 -- dd/common.sh@12 -- # local size=4194330 00:47:09.221 19:40:24 -- dd/common.sh@14 -- # local bs=1048576 00:47:09.221 19:40:24 -- dd/common.sh@15 -- # local count=5 00:47:09.221 19:40:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:47:09.221 19:40:24 -- dd/common.sh@18 -- # gen_conf 00:47:09.221 19:40:24 -- dd/common.sh@31 -- # xtrace_disable 00:47:09.221 19:40:24 -- common/autotest_common.sh@10 -- # set +x 00:47:09.221 [2024-04-18 19:40:24.794725] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:09.221 [2024-04-18 19:40:24.794874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148022 ] 00:47:09.221 { 00:47:09.221 "subsystems": [ 00:47:09.221 { 00:47:09.221 "subsystem": "bdev", 00:47:09.221 "config": [ 00:47:09.221 { 00:47:09.221 "params": { 00:47:09.221 "block_size": 4096, 00:47:09.221 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:47:09.221 "name": "aio1" 00:47:09.221 }, 00:47:09.221 "method": "bdev_aio_create" 00:47:09.221 }, 00:47:09.221 { 00:47:09.221 "params": { 00:47:09.221 "trtype": "pcie", 00:47:09.221 "traddr": "0000:00:10.0", 00:47:09.221 "name": "Nvme0" 00:47:09.221 }, 00:47:09.221 "method": "bdev_nvme_attach_controller" 00:47:09.221 }, 00:47:09.221 { 00:47:09.221 "method": "bdev_wait_for_examine" 00:47:09.221 } 00:47:09.221 ] 00:47:09.221 } 00:47:09.221 ] 00:47:09.221 } 00:47:09.221 [2024-04-18 19:40:24.959776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:09.480 [2024-04-18 19:40:25.241228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:11.489  Copying: 5120/5120 [kB] (average 1250 MBps) 00:47:11.489 00:47:11.489 19:40:27 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:47:11.489 19:40:27 -- dd/common.sh@10 -- # local bdev=aio1 00:47:11.489 19:40:27 -- dd/common.sh@11 -- # local nvme_ref= 00:47:11.489 19:40:27 -- dd/common.sh@12 -- # local size=4194330 00:47:11.489 19:40:27 -- dd/common.sh@14 -- # local bs=1048576 00:47:11.489 19:40:27 -- dd/common.sh@15 -- # local count=5 00:47:11.489 19:40:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:47:11.489 19:40:27 -- dd/common.sh@18 -- # gen_conf 00:47:11.489 19:40:27 -- dd/common.sh@31 -- # xtrace_disable 00:47:11.489 19:40:27 -- common/autotest_common.sh@10 -- # set +x 00:47:11.489 [2024-04-18 19:40:27.104783] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:11.489 [2024-04-18 19:40:27.104945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148063 ] 00:47:11.489 { 00:47:11.489 "subsystems": [ 00:47:11.489 { 00:47:11.489 "subsystem": "bdev", 00:47:11.489 "config": [ 00:47:11.489 { 00:47:11.489 "params": { 00:47:11.489 "block_size": 4096, 00:47:11.489 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:47:11.489 "name": "aio1" 00:47:11.489 }, 00:47:11.489 "method": "bdev_aio_create" 00:47:11.489 }, 00:47:11.489 { 00:47:11.489 "params": { 00:47:11.489 "trtype": "pcie", 00:47:11.489 "traddr": "0000:00:10.0", 00:47:11.489 "name": "Nvme0" 00:47:11.489 }, 00:47:11.489 "method": "bdev_nvme_attach_controller" 00:47:11.489 }, 00:47:11.489 { 00:47:11.489 "method": "bdev_wait_for_examine" 00:47:11.489 } 00:47:11.489 ] 00:47:11.489 } 00:47:11.489 ] 00:47:11.489 } 00:47:11.489 [2024-04-18 19:40:27.265042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:11.748 [2024-04-18 19:40:27.473060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:13.439  Copying: 5120/5120 [kB] (average 277 MBps) 00:47:13.439 00:47:13.439 19:40:29 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:47:13.700 ************************************ 00:47:13.700 END TEST spdk_dd_bdev_to_bdev 00:47:13.700 ************************************ 00:47:13.700 00:47:13.700 real 0m22.653s 00:47:13.700 user 0m18.428s 00:47:13.700 sys 0m2.820s 00:47:13.700 19:40:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:13.700 19:40:29 -- common/autotest_common.sh@10 -- # set +x 00:47:13.700 19:40:29 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:47:13.700 19:40:29 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:47:13.700 19:40:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:13.700 19:40:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:13.700 19:40:29 -- common/autotest_common.sh@10 -- # set +x 00:47:13.700 ************************************ 00:47:13.700 START TEST spdk_dd_sparse 00:47:13.700 ************************************ 00:47:13.700 19:40:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:47:13.700 * Looking for test storage... 
00:47:13.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:47:13.700 19:40:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:13.700 19:40:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:13.700 19:40:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:13.700 19:40:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:13.700 19:40:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:13.700 19:40:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:13.700 19:40:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:13.700 19:40:29 -- paths/export.sh@5 -- # export PATH 00:47:13.700 19:40:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:13.700 19:40:29 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:47:13.700 19:40:29 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:47:13.700 19:40:29 -- dd/sparse.sh@110 -- # file1=file_zero1 00:47:13.700 19:40:29 -- dd/sparse.sh@111 -- # file2=file_zero2 00:47:13.700 19:40:29 -- dd/sparse.sh@112 -- # file3=file_zero3 00:47:13.700 19:40:29 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:47:13.700 19:40:29 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:47:13.700 19:40:29 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:47:13.700 19:40:29 -- dd/sparse.sh@118 -- # prepare 00:47:13.700 19:40:29 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:47:13.700 19:40:29 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:47:13.700 1+0 records in 00:47:13.700 1+0 records 
out 00:47:13.700 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0131913 s, 318 MB/s 00:47:13.700 19:40:29 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:47:13.700 1+0 records in 00:47:13.700 1+0 records out 00:47:13.700 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0107855 s, 389 MB/s 00:47:13.700 19:40:29 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:47:13.700 1+0 records in 00:47:13.700 1+0 records out 00:47:13.700 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0108087 s, 388 MB/s 00:47:13.700 19:40:29 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:47:13.700 19:40:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:13.700 19:40:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:13.700 19:40:29 -- common/autotest_common.sh@10 -- # set +x 00:47:13.960 ************************************ 00:47:13.960 START TEST dd_sparse_file_to_file 00:47:13.960 ************************************ 00:47:13.960 19:40:29 -- common/autotest_common.sh@1111 -- # file_to_file 00:47:13.960 19:40:29 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:47:13.960 19:40:29 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:47:13.960 19:40:29 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:47:13.960 19:40:29 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:47:13.960 19:40:29 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:47:13.960 19:40:29 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:47:13.960 19:40:29 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:47:13.961 19:40:29 -- dd/sparse.sh@41 -- # gen_conf 00:47:13.961 19:40:29 -- dd/common.sh@31 -- # xtrace_disable 00:47:13.961 19:40:29 -- common/autotest_common.sh@10 -- # set +x 00:47:13.961 [2024-04-18 19:40:29.723683] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:13.961 [2024-04-18 19:40:29.723848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148160 ] 00:47:13.961 { 00:47:13.961 "subsystems": [ 00:47:13.961 { 00:47:13.961 "subsystem": "bdev", 00:47:13.961 "config": [ 00:47:13.961 { 00:47:13.961 "params": { 00:47:13.961 "block_size": 4096, 00:47:13.961 "filename": "dd_sparse_aio_disk", 00:47:13.961 "name": "dd_aio" 00:47:13.961 }, 00:47:13.961 "method": "bdev_aio_create" 00:47:13.961 }, 00:47:13.961 { 00:47:13.961 "params": { 00:47:13.961 "lvs_name": "dd_lvstore", 00:47:13.961 "bdev_name": "dd_aio" 00:47:13.961 }, 00:47:13.961 "method": "bdev_lvol_create_lvstore" 00:47:13.961 }, 00:47:13.961 { 00:47:13.961 "method": "bdev_wait_for_examine" 00:47:13.961 } 00:47:13.961 ] 00:47:13.961 } 00:47:13.961 ] 00:47:13.961 } 00:47:14.222 [2024-04-18 19:40:29.887971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:14.487 [2024-04-18 19:40:30.149405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:16.172  Copying: 12/36 [MB] (average 1000 MBps) 00:47:16.172 00:47:16.172 19:40:32 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:47:16.172 19:40:32 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:47:16.172 19:40:32 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:47:16.172 19:40:32 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:47:16.172 19:40:32 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:47:16.172 19:40:32 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:47:16.172 19:40:32 -- dd/sparse.sh@52 -- # stat1_b=24576 00:47:16.172 19:40:32 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:47:16.172 19:40:32 -- dd/sparse.sh@53 -- # stat2_b=24576 00:47:16.172 19:40:32 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:47:16.172 00:47:16.172 real 0m2.387s 00:47:16.172 user 0m2.016s 00:47:16.172 sys 0m0.250s 00:47:16.172 19:40:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:16.172 19:40:32 -- common/autotest_common.sh@10 -- # set +x 00:47:16.172 ************************************ 00:47:16.172 END TEST dd_sparse_file_to_file 00:47:16.172 ************************************ 00:47:16.172 19:40:32 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:47:16.172 19:40:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:16.172 19:40:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:16.172 19:40:32 -- common/autotest_common.sh@10 -- # set +x 00:47:16.439 ************************************ 00:47:16.439 START TEST dd_sparse_file_to_bdev 00:47:16.439 ************************************ 00:47:16.439 19:40:32 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:47:16.439 19:40:32 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:47:16.439 19:40:32 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:47:16.439 19:40:32 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:47:16.439 19:40:32 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:47:16.439 19:40:32 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:47:16.439 19:40:32 -- dd/sparse.sh@73 -- # gen_conf 00:47:16.439 19:40:32 -- 
dd/common.sh@31 -- # xtrace_disable 00:47:16.439 19:40:32 -- common/autotest_common.sh@10 -- # set +x 00:47:16.439 [2024-04-18 19:40:32.197867] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:47:16.439 [2024-04-18 19:40:32.198192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148236 ] 00:47:16.439 { 00:47:16.439 "subsystems": [ 00:47:16.439 { 00:47:16.439 "subsystem": "bdev", 00:47:16.439 "config": [ 00:47:16.439 { 00:47:16.439 "params": { 00:47:16.439 "block_size": 4096, 00:47:16.440 "filename": "dd_sparse_aio_disk", 00:47:16.440 "name": "dd_aio" 00:47:16.440 }, 00:47:16.440 "method": "bdev_aio_create" 00:47:16.440 }, 00:47:16.440 { 00:47:16.440 "params": { 00:47:16.440 "lvs_name": "dd_lvstore", 00:47:16.440 "thin_provision": true, 00:47:16.440 "lvol_name": "dd_lvol", 00:47:16.440 "size": 37748736 00:47:16.440 }, 00:47:16.440 "method": "bdev_lvol_create" 00:47:16.440 }, 00:47:16.440 { 00:47:16.440 "method": "bdev_wait_for_examine" 00:47:16.440 } 00:47:16.440 ] 00:47:16.440 } 00:47:16.440 ] 00:47:16.440 } 00:47:16.440 [2024-04-18 19:40:32.360921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:16.698 [2024-04-18 19:40:32.565639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:17.308 [2024-04-18 19:40:32.909270] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:47:17.308  Copying: 12/36 [MB] (average 521 MBps)[2024-04-18 19:40:32.971095] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:47:18.709 00:47:18.709 00:47:18.709 ************************************ 00:47:18.709 END TEST dd_sparse_file_to_bdev 00:47:18.709 ************************************ 00:47:18.709 00:47:18.709 real 0m2.328s 00:47:18.709 user 0m2.002s 00:47:18.709 sys 0m0.233s 00:47:18.709 19:40:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:18.709 19:40:34 -- common/autotest_common.sh@10 -- # set +x 00:47:18.709 19:40:34 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:47:18.709 19:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:18.709 19:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:18.709 19:40:34 -- common/autotest_common.sh@10 -- # set +x 00:47:18.709 ************************************ 00:47:18.709 START TEST dd_sparse_bdev_to_file 00:47:18.709 ************************************ 00:47:18.709 19:40:34 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:47:18.709 19:40:34 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:47:18.709 19:40:34 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:47:18.709 19:40:34 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:47:18.709 19:40:34 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:47:18.709 19:40:34 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:47:18.709 19:40:34 -- dd/sparse.sh@91 -- # gen_conf 00:47:18.709 19:40:34 -- dd/common.sh@31 -- # xtrace_disable 00:47:18.709 19:40:34 -- common/autotest_common.sh@10 -- # set +x 
00:47:18.709 { 00:47:18.709 "subsystems": [ 00:47:18.709 { 00:47:18.709 "subsystem": "bdev", 00:47:18.709 "config": [ 00:47:18.709 { 00:47:18.709 "params": { 00:47:18.709 "block_size": 4096, 00:47:18.709 "filename": "dd_sparse_aio_disk", 00:47:18.709 "name": "dd_aio" 00:47:18.709 }, 00:47:18.709 "method": "bdev_aio_create" 00:47:18.709 }, 00:47:18.709 { 00:47:18.709 "method": "bdev_wait_for_examine" 00:47:18.709 } 00:47:18.709 ] 00:47:18.709 } 00:47:18.709 ] 00:47:18.709 } 00:47:18.967 [2024-04-18 19:40:34.635883] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:47:18.967 [2024-04-18 19:40:34.636408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148315 ] 00:47:18.967 [2024-04-18 19:40:34.827242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:19.225 [2024-04-18 19:40:35.080845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:21.165  Copying: 12/36 [MB] (average 1000 MBps) 00:47:21.165 00:47:21.165 19:40:36 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:47:21.165 19:40:36 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:47:21.165 19:40:36 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:47:21.165 19:40:36 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:47:21.165 19:40:36 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:47:21.165 19:40:36 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:47:21.165 19:40:36 -- dd/sparse.sh@102 -- # stat2_b=24576 00:47:21.165 19:40:36 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:47:21.165 19:40:36 -- dd/sparse.sh@103 -- # stat3_b=24576 00:47:21.165 19:40:36 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:47:21.165 00:47:21.165 real 0m2.384s 00:47:21.165 user 0m2.003s 00:47:21.165 sys 0m0.278s 00:47:21.165 ************************************ 00:47:21.165 END TEST dd_sparse_bdev_to_file 00:47:21.165 ************************************ 00:47:21.165 19:40:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:21.165 19:40:36 -- common/autotest_common.sh@10 -- # set +x 00:47:21.165 19:40:36 -- dd/sparse.sh@1 -- # cleanup 00:47:21.166 19:40:36 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:47:21.166 19:40:36 -- dd/sparse.sh@12 -- # rm file_zero1 00:47:21.166 19:40:37 -- dd/sparse.sh@13 -- # rm file_zero2 00:47:21.166 19:40:37 -- dd/sparse.sh@14 -- # rm file_zero3 00:47:21.166 ************************************ 00:47:21.166 END TEST spdk_dd_sparse 00:47:21.166 ************************************ 00:47:21.166 00:47:21.166 real 0m7.556s 00:47:21.166 user 0m6.230s 00:47:21.166 sys 0m1.014s 00:47:21.166 19:40:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:21.166 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.166 19:40:37 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:47:21.166 19:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:21.166 19:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:21.166 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.424 ************************************ 00:47:21.424 START TEST spdk_dd_negative 00:47:21.424 ************************************ 00:47:21.424 19:40:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:47:21.424 * Looking for test storage... 
00:47:21.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:47:21.424 19:40:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:21.424 19:40:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:21.424 19:40:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:21.424 19:40:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:21.424 19:40:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:21.424 19:40:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:21.424 19:40:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:21.424 19:40:37 -- paths/export.sh@5 -- # export PATH 00:47:21.424 19:40:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:21.424 19:40:37 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:21.424 19:40:37 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:21.424 19:40:37 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:21.424 19:40:37 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:21.424 19:40:37 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:47:21.424 19:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:21.424 19:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:21.424 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.424 ************************************ 00:47:21.424 
START TEST dd_invalid_arguments 00:47:21.424 ************************************ 00:47:21.424 19:40:37 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:47:21.424 19:40:37 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:47:21.424 19:40:37 -- common/autotest_common.sh@638 -- # local es=0 00:47:21.424 19:40:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:47:21.424 19:40:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.424 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.424 19:40:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.424 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.424 19:40:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.424 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.424 19:40:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.424 19:40:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:21.424 19:40:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:47:21.682 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:47:21.682 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:47:21.682 00:47:21.682 CPU options: 00:47:21.682 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:47:21.682 (like [0,1,10]) 00:47:21.682 --lcores lcore to CPU mapping list. The list is in the format: 00:47:21.682 [<,lcores[@CPUs]>...] 00:47:21.682 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:47:21.682 Within the group, '-' is used for range separator, 00:47:21.682 ',' is used for single number separator. 00:47:21.682 '( )' can be omitted for single element group, 00:47:21.682 '@' can be omitted if cpus and lcores have the same value 00:47:21.682 --disable-cpumask-locks Disable CPU core lock files. 00:47:21.682 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:47:21.682 pollers in the app support interrupt mode) 00:47:21.682 -p, --main-core main (primary) core for DPDK 00:47:21.682 00:47:21.682 Configuration options: 00:47:21.682 -c, --config, --json JSON config file 00:47:21.682 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:47:21.682 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:47:21.682 --wait-for-rpc wait for RPCs to initialize subsystems 00:47:21.682 --rpcs-allowed comma-separated list of permitted RPCS 00:47:21.682 --json-ignore-init-errors don't exit on invalid config entry 00:47:21.682 00:47:21.682 Memory options: 00:47:21.682 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:47:21.682 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:47:21.682 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:47:21.682 -R, --huge-unlink unlink huge files after initialization 00:47:21.682 -n, --mem-channels number of memory channels used for DPDK 00:47:21.682 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:47:21.682 --msg-mempool-size global message memory pool size in count (default: 262143) 00:47:21.682 --no-huge run without using hugepages 00:47:21.682 -i, --shm-id shared memory ID (optional) 00:47:21.682 -g, --single-file-segments force creating just one hugetlbfs file 00:47:21.682 00:47:21.682 PCI options: 00:47:21.682 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:47:21.683 -B, --pci-blocked pci addr to block (can be used more than once) 00:47:21.683 -u, --no-pci disable PCI access 00:47:21.683 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:47:21.683 00:47:21.683 Log options: 00:47:21.683 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:47:21.683 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:47:21.683 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:47:21.683 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:47:21.683 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:47:21.683 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:47:21.683 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:47:21.683 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:47:21.683 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:47:21.683 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:47:21.683 virtio_vfio_user, vmd) 00:47:21.683 --silence-noticelog disable notice level logging to stderr 00:47:21.683 00:47:21.683 Trace options: 00:47:21.683 --num-trace-entries number of trace entries for each core, must be power of 2, 00:47:21.683 setting 0 to disable trace (default 32768) 00:47:21.683 Tracepoints vary in size and can use more than one trace entry. 00:47:21.683 -e, --tpoint-group [:] 00:47:21.683 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:47:21.683 [2024-04-18 19:40:37.358209] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:47:21.683 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:47:21.683 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:47:21.683 a tracepoint group. First tpoint inside a group can be enabled by 00:47:21.683 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:47:21.683 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:47:21.683 in /include/spdk_internal/trace_defs.h 00:47:21.683 00:47:21.683 Other options: 00:47:21.683 -h, --help show this usage 00:47:21.683 -v, --version print SPDK version 00:47:21.683 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:47:21.683 --env-context Opaque context for use of the env implementation 00:47:21.683 00:47:21.683 Application specific: 00:47:21.683 [--------- DD Options ---------] 00:47:21.683 --if Input file. Must specify either --if or --ib. 00:47:21.683 --ib Input bdev. Must specifier either --if or --ib 00:47:21.683 --of Output file. Must specify either --of or --ob. 00:47:21.683 --ob Output bdev. Must specify either --of or --ob. 00:47:21.683 --iflag Input file flags. 00:47:21.683 --oflag Output file flags. 00:47:21.683 --bs I/O unit size (default: 4096) 00:47:21.683 --qd Queue depth (default: 2) 00:47:21.683 --count I/O unit count. The number of I/O units to copy. (default: all) 00:47:21.683 --skip Skip this many I/O units at start of input. (default: 0) 00:47:21.683 --seek Skip this many I/O units at start of output. (default: 0) 00:47:21.683 --aio Force usage of AIO. (by default io_uring is used if available) 00:47:21.683 --sparse Enable hole skipping in input target 00:47:21.683 Available iflag and oflag values: 00:47:21.683 append - append mode 00:47:21.683 direct - use direct I/O for data 00:47:21.683 directory - fail unless a directory 00:47:21.683 dsync - use synchronized I/O for data 00:47:21.683 noatime - do not update access time 00:47:21.683 noctty - do not assign controlling terminal from file 00:47:21.683 nofollow - do not follow symlinks 00:47:21.683 nonblock - use non-blocking I/O 00:47:21.683 sync - use synchronized I/O for data and metadata 00:47:21.683 19:40:37 -- common/autotest_common.sh@641 -- # es=2 00:47:21.683 ************************************ 00:47:21.683 END TEST dd_invalid_arguments 00:47:21.683 ************************************ 00:47:21.683 19:40:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:21.683 19:40:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:21.683 19:40:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:21.683 00:47:21.683 real 0m0.132s 00:47:21.683 user 0m0.073s 00:47:21.683 sys 0m0.058s 00:47:21.683 19:40:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:21.683 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.683 19:40:37 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:47:21.683 19:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:21.683 19:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:21.683 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.683 ************************************ 00:47:21.683 START TEST dd_double_input 00:47:21.683 ************************************ 00:47:21.683 19:40:37 -- common/autotest_common.sh@1111 -- # double_input 00:47:21.683 19:40:37 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:47:21.683 19:40:37 -- common/autotest_common.sh@638 -- # local es=0 00:47:21.683 19:40:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:47:21.683 19:40:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.683 19:40:37 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:47:21.683 19:40:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.683 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.683 19:40:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.683 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.683 19:40:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.683 19:40:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:21.683 19:40:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:47:21.683 [2024-04-18 19:40:37.581067] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:47:21.942 19:40:37 -- common/autotest_common.sh@641 -- # es=22 00:47:21.942 19:40:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:21.942 19:40:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:21.942 19:40:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:21.942 00:47:21.942 real 0m0.145s 00:47:21.942 user 0m0.092s 00:47:21.942 sys 0m0.050s 00:47:21.942 19:40:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:21.942 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.942 ************************************ 00:47:21.942 END TEST dd_double_input 00:47:21.942 ************************************ 00:47:21.942 19:40:37 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:47:21.942 19:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:21.942 19:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:21.942 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.942 ************************************ 00:47:21.942 START TEST dd_double_output 00:47:21.942 ************************************ 00:47:21.942 19:40:37 -- common/autotest_common.sh@1111 -- # double_output 00:47:21.942 19:40:37 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:47:21.942 19:40:37 -- common/autotest_common.sh@638 -- # local es=0 00:47:21.942 19:40:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:47:21.942 19:40:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.942 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.942 19:40:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.942 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.942 19:40:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.942 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:21.942 19:40:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:21.942 19:40:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:21.942 19:40:37 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:47:21.942 [2024-04-18 19:40:37.814295] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:47:21.942 19:40:37 -- common/autotest_common.sh@641 -- # es=22 00:47:21.942 19:40:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:21.942 19:40:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:21.942 19:40:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:21.942 00:47:21.942 real 0m0.112s 00:47:21.942 user 0m0.049s 00:47:21.942 sys 0m0.060s 00:47:21.942 19:40:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:21.942 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:21.942 ************************************ 00:47:21.942 END TEST dd_double_output 00:47:21.942 ************************************ 00:47:22.199 19:40:37 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:47:22.199 19:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:22.199 19:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:22.199 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:47:22.199 ************************************ 00:47:22.199 START TEST dd_no_input 00:47:22.199 ************************************ 00:47:22.199 19:40:37 -- common/autotest_common.sh@1111 -- # no_input 00:47:22.199 19:40:37 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:47:22.199 19:40:37 -- common/autotest_common.sh@638 -- # local es=0 00:47:22.199 19:40:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:47:22.199 19:40:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.199 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.199 19:40:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.199 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.199 19:40:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.199 19:40:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.199 19:40:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.199 19:40:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:22.199 19:40:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:47:22.199 [2024-04-18 19:40:38.036606] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:47:22.199 19:40:38 -- common/autotest_common.sh@641 -- # es=22 00:47:22.199 19:40:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:22.199 19:40:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:22.199 19:40:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:22.200 00:47:22.200 real 0m0.144s 00:47:22.200 user 0m0.067s 00:47:22.200 sys 0m0.075s 00:47:22.200 19:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:22.200 19:40:38 -- common/autotest_common.sh@10 -- # set +x 00:47:22.200 ************************************ 00:47:22.200 END TEST dd_no_input 00:47:22.200 ************************************ 00:47:22.457 19:40:38 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:47:22.457 19:40:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:22.457 19:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:22.457 19:40:38 -- common/autotest_common.sh@10 -- # set +x 00:47:22.457 ************************************ 00:47:22.457 START TEST dd_no_output 00:47:22.457 ************************************ 00:47:22.457 19:40:38 -- common/autotest_common.sh@1111 -- # no_output 00:47:22.457 19:40:38 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:22.457 19:40:38 -- common/autotest_common.sh@638 -- # local es=0 00:47:22.457 19:40:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:22.457 19:40:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.457 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.457 19:40:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.457 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.457 19:40:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.457 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.457 19:40:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.457 19:40:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:22.457 19:40:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:22.457 [2024-04-18 19:40:38.267173] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:47:22.457 19:40:38 -- common/autotest_common.sh@641 -- # es=22 00:47:22.457 19:40:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:22.457 19:40:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:22.457 19:40:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:22.457 00:47:22.457 real 0m0.131s 00:47:22.457 user 0m0.057s 00:47:22.457 sys 0m0.073s 00:47:22.457 19:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:22.457 19:40:38 -- common/autotest_common.sh@10 -- # set +x 00:47:22.457 ************************************ 00:47:22.457 END TEST dd_no_output 00:47:22.457 ************************************ 00:47:22.457 19:40:38 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:47:22.457 19:40:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:22.457 19:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:22.457 19:40:38 -- common/autotest_common.sh@10 -- # set +x 00:47:22.715 ************************************ 00:47:22.715 START TEST dd_wrong_blocksize 00:47:22.715 ************************************ 00:47:22.715 19:40:38 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:47:22.715 19:40:38 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:47:22.715 19:40:38 -- common/autotest_common.sh@638 -- # local es=0 00:47:22.715 19:40:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:47:22.715 19:40:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.715 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.715 19:40:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.715 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.715 19:40:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.715 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.715 19:40:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.715 19:40:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:22.715 19:40:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:47:22.715 [2024-04-18 19:40:38.492039] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:47:22.715 19:40:38 -- common/autotest_common.sh@641 -- # es=22 00:47:22.715 19:40:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:22.715 19:40:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:22.715 19:40:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:22.715 00:47:22.715 real 0m0.134s 00:47:22.715 user 0m0.066s 00:47:22.715 sys 0m0.070s 00:47:22.715 19:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:22.715 19:40:38 -- common/autotest_common.sh@10 -- # set +x 00:47:22.715 ************************************ 00:47:22.715 END TEST dd_wrong_blocksize 00:47:22.715 ************************************ 00:47:22.715 19:40:38 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:47:22.715 19:40:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:22.715 19:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:22.715 19:40:38 -- common/autotest_common.sh@10 -- # set +x 00:47:22.973 ************************************ 00:47:22.973 START TEST dd_smaller_blocksize 00:47:22.973 ************************************ 00:47:22.973 19:40:38 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:47:22.973 19:40:38 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:47:22.973 19:40:38 -- common/autotest_common.sh@638 -- # local es=0 00:47:22.973 19:40:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:47:22.973 19:40:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.973 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.973 19:40:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.973 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.973 19:40:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.973 19:40:38 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:22.973 19:40:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:22.973 19:40:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:22.973 19:40:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:47:22.973 [2024-04-18 19:40:38.704525] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:47:22.973 [2024-04-18 19:40:38.704696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148631 ] 00:47:22.973 [2024-04-18 19:40:38.872431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:23.539 [2024-04-18 19:40:39.157563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:24.106 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:47:24.106 [2024-04-18 19:40:39.888796] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:47:24.106 [2024-04-18 19:40:39.888890] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:25.041 [2024-04-18 19:40:40.757998] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:47:25.299 19:40:41 -- common/autotest_common.sh@641 -- # es=244 00:47:25.299 19:40:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:25.299 19:40:41 -- common/autotest_common.sh@650 -- # es=116 00:47:25.299 19:40:41 -- common/autotest_common.sh@651 -- # case "$es" in 00:47:25.299 19:40:41 -- common/autotest_common.sh@658 -- # es=1 00:47:25.299 ************************************ 00:47:25.299 END TEST dd_smaller_blocksize 00:47:25.299 ************************************ 00:47:25.299 19:40:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:25.299 00:47:25.299 real 0m2.559s 00:47:25.299 user 0m1.963s 00:47:25.299 sys 0m0.496s 00:47:25.299 19:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:25.299 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:25.558 19:40:41 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:47:25.558 19:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:25.558 19:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:25.558 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:25.558 ************************************ 00:47:25.558 START TEST dd_invalid_count 00:47:25.558 ************************************ 00:47:25.558 19:40:41 -- common/autotest_common.sh@1111 -- # invalid_count 00:47:25.558 19:40:41 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:47:25.558 19:40:41 -- common/autotest_common.sh@638 -- # local es=0 00:47:25.558 19:40:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:47:25.558 19:40:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.558 19:40:41 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:25.558 19:40:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.558 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:25.558 19:40:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.558 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:25.558 19:40:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.558 19:40:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:25.558 19:40:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:47:25.558 [2024-04-18 19:40:41.392326] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:47:25.558 ************************************ 00:47:25.558 END TEST dd_invalid_count 00:47:25.558 ************************************ 00:47:25.558 19:40:41 -- common/autotest_common.sh@641 -- # es=22 00:47:25.558 19:40:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:25.558 19:40:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:25.558 19:40:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:25.558 00:47:25.558 real 0m0.148s 00:47:25.558 user 0m0.080s 00:47:25.558 sys 0m0.068s 00:47:25.558 19:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:25.558 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:25.816 19:40:41 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:47:25.816 19:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:25.816 19:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:25.817 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:25.817 ************************************ 00:47:25.817 START TEST dd_invalid_oflag 00:47:25.817 ************************************ 00:47:25.817 19:40:41 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:47:25.817 19:40:41 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:47:25.817 19:40:41 -- common/autotest_common.sh@638 -- # local es=0 00:47:25.817 19:40:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:47:25.817 19:40:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.817 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:25.817 19:40:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.817 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:25.817 19:40:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.817 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:25.817 19:40:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:25.817 19:40:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:25.817 19:40:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:47:25.817 [2024-04-18 19:40:41.606976] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:47:25.817 ************************************ 00:47:25.817 END TEST dd_invalid_oflag 00:47:25.817 ************************************ 00:47:25.817 19:40:41 -- common/autotest_common.sh@641 -- # es=22 00:47:25.817 19:40:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:25.817 19:40:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:25.817 19:40:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:25.817 00:47:25.817 real 0m0.125s 00:47:25.817 user 0m0.060s 00:47:25.817 sys 0m0.064s 00:47:25.817 19:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:25.817 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:25.817 19:40:41 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:47:25.817 19:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:25.817 19:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:25.817 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:26.079 ************************************ 00:47:26.079 START TEST dd_invalid_iflag 00:47:26.079 ************************************ 00:47:26.079 19:40:41 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:47:26.079 19:40:41 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:47:26.079 19:40:41 -- common/autotest_common.sh@638 -- # local es=0 00:47:26.079 19:40:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:47:26.079 19:40:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.079 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:26.080 19:40:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.080 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:26.080 19:40:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.080 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:26.080 19:40:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.080 19:40:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:26.080 19:40:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:47:26.080 [2024-04-18 19:40:41.823936] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:47:26.080 19:40:41 -- common/autotest_common.sh@641 -- # es=22 00:47:26.080 19:40:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:26.080 19:40:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:47:26.080 19:40:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:26.080 00:47:26.080 real 0m0.132s 00:47:26.080 user 0m0.062s 00:47:26.080 sys 0m0.069s 00:47:26.080 19:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:26.080 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:26.080 ************************************ 00:47:26.080 END TEST dd_invalid_iflag 00:47:26.080 ************************************ 00:47:26.080 19:40:41 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:47:26.080 19:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:26.080 19:40:41 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:47:26.080 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:47:26.080 ************************************ 00:47:26.080 START TEST dd_unknown_flag 00:47:26.080 ************************************ 00:47:26.080 19:40:41 -- common/autotest_common.sh@1111 -- # unknown_flag 00:47:26.080 19:40:41 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:47:26.080 19:40:41 -- common/autotest_common.sh@638 -- # local es=0 00:47:26.080 19:40:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:47:26.080 19:40:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.080 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:26.080 19:40:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.338 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:26.338 19:40:42 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.338 19:40:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:26.338 19:40:42 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:26.338 19:40:42 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:26.338 19:40:42 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:47:26.338 [2024-04-18 19:40:42.084178] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:26.338 [2024-04-18 19:40:42.084807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148780 ] 00:47:26.596 [2024-04-18 19:40:42.264907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:26.596 [2024-04-18 19:40:42.495409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:27.161 [2024-04-18 19:40:42.814411] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:47:27.162  Copying: 0/0 [B] (average 0 Bps)[2024-04-18 19:40:42.814648] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:27.162 [2024-04-18 19:40:42.814912] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:47:28.095 [2024-04-18 19:40:43.656897] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:47:28.353 00:47:28.353 00:47:28.353 ************************************ 00:47:28.353 END TEST dd_unknown_flag 00:47:28.353 ************************************ 00:47:28.353 19:40:44 -- common/autotest_common.sh@641 -- # es=234 00:47:28.353 19:40:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:28.353 19:40:44 -- common/autotest_common.sh@650 -- # es=106 00:47:28.354 19:40:44 -- common/autotest_common.sh@651 -- # case "$es" in 00:47:28.354 19:40:44 -- common/autotest_common.sh@658 -- # es=1 00:47:28.354 19:40:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:28.354 00:47:28.354 real 0m2.147s 00:47:28.354 user 0m1.790s 00:47:28.354 sys 0m0.224s 00:47:28.354 19:40:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:28.354 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:47:28.354 19:40:44 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:47:28.354 19:40:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:47:28.354 19:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:28.354 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:47:28.354 ************************************ 00:47:28.354 START TEST dd_invalid_json 00:47:28.354 ************************************ 00:47:28.354 19:40:44 -- common/autotest_common.sh@1111 -- # invalid_json 00:47:28.354 19:40:44 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:47:28.354 19:40:44 -- dd/negative_dd.sh@95 -- # : 00:47:28.354 19:40:44 -- common/autotest_common.sh@638 -- # local es=0 00:47:28.354 19:40:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:47:28.354 19:40:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:28.354 19:40:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:28.354 19:40:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:28.354 19:40:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:28.354 19:40:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:28.354 19:40:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:47:28.354 19:40:44 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:28.354 19:40:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:28.354 19:40:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:47:28.612 [2024-04-18 19:40:44.312006] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:47:28.612 [2024-04-18 19:40:44.312423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148857 ] 00:47:28.612 [2024-04-18 19:40:44.495512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:28.870 [2024-04-18 19:40:44.712629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:28.870 [2024-04-18 19:40:44.712905] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:47:28.870 [2024-04-18 19:40:44.713026] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:28.870 [2024-04-18 19:40:44.713121] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:28.870 [2024-04-18 19:40:44.713298] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:47:29.467 ************************************ 00:47:29.467 END TEST dd_invalid_json 00:47:29.467 ************************************ 00:47:29.467 19:40:45 -- common/autotest_common.sh@641 -- # es=234 00:47:29.467 19:40:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:47:29.467 19:40:45 -- common/autotest_common.sh@650 -- # es=106 00:47:29.467 19:40:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:47:29.467 19:40:45 -- common/autotest_common.sh@658 -- # es=1 00:47:29.467 19:40:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:47:29.467 00:47:29.467 real 0m0.937s 00:47:29.467 user 0m0.666s 00:47:29.467 sys 0m0.168s 00:47:29.467 19:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:29.467 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:47:29.467 ************************************ 00:47:29.467 END TEST spdk_dd_negative 00:47:29.467 ************************************ 00:47:29.467 00:47:29.467 real 0m8.108s 00:47:29.467 user 0m5.642s 00:47:29.467 sys 0m2.101s 00:47:29.467 19:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:29.467 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:47:29.467 ************************************ 00:47:29.467 END TEST spdk_dd 00:47:29.467 ************************************ 00:47:29.467 00:47:29.467 real 3m7.051s 00:47:29.467 user 2m34.906s 00:47:29.467 sys 0m22.302s 00:47:29.467 19:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:29.467 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:47:29.467 19:40:45 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:47:29.467 19:40:45 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:47:29.467 19:40:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:47:29.467 19:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:29.467 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:47:29.467 ************************************ 00:47:29.467 START TEST blockdev_nvme 00:47:29.467 ************************************ 
00:47:29.467 19:40:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:47:29.725 * Looking for test storage... 00:47:29.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:47:29.725 19:40:45 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:47:29.725 19:40:45 -- bdev/nbd_common.sh@6 -- # set -e 00:47:29.725 19:40:45 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:47:29.725 19:40:45 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:29.725 19:40:45 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:47:29.725 19:40:45 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:47:29.725 19:40:45 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:47:29.725 19:40:45 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:47:29.725 19:40:45 -- bdev/blockdev.sh@20 -- # : 00:47:29.725 19:40:45 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:47:29.725 19:40:45 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:47:29.725 19:40:45 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:47:29.725 19:40:45 -- bdev/blockdev.sh@674 -- # uname -s 00:47:29.725 19:40:45 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:47:29.725 19:40:45 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:47:29.725 19:40:45 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:47:29.725 19:40:45 -- bdev/blockdev.sh@683 -- # crypto_device= 00:47:29.725 19:40:45 -- bdev/blockdev.sh@684 -- # dek= 00:47:29.725 19:40:45 -- bdev/blockdev.sh@685 -- # env_ctx= 00:47:29.725 19:40:45 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:47:29.725 19:40:45 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:47:29.725 19:40:45 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:47:29.725 19:40:45 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:47:29.725 19:40:45 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:47:29.725 19:40:45 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=148961 00:47:29.725 19:40:45 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:47:29.725 19:40:45 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:29.725 19:40:45 -- bdev/blockdev.sh@49 -- # waitforlisten 148961 00:47:29.725 19:40:45 -- common/autotest_common.sh@817 -- # '[' -z 148961 ']' 00:47:29.725 19:40:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:29.725 19:40:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:47:29.725 19:40:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:29.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:29.725 19:40:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:47:29.725 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:47:29.725 [2024-04-18 19:40:45.557725] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:29.725 [2024-04-18 19:40:45.558117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148961 ] 00:47:29.983 [2024-04-18 19:40:45.736966] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:30.239 [2024-04-18 19:40:46.030883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:31.170 19:40:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:47:31.170 19:40:47 -- common/autotest_common.sh@850 -- # return 0 00:47:31.170 19:40:47 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:47:31.170 19:40:47 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:47:31.170 19:40:47 -- bdev/blockdev.sh@81 -- # local json 00:47:31.170 19:40:47 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:47:31.170 19:40:47 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:31.427 19:40:47 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:47:31.427 19:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:47:31.427 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:47:31.427 19:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:47:31.427 19:40:47 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:47:31.427 19:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:47:31.427 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:47:31.427 19:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:47:31.427 19:40:47 -- bdev/blockdev.sh@740 -- # cat 00:47:31.427 19:40:47 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:47:31.427 19:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:47:31.427 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:47:31.427 19:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:47:31.427 19:40:47 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:47:31.427 19:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:47:31.427 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:47:31.427 19:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:47:31.427 19:40:47 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:47:31.427 19:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:47:31.427 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:47:31.427 19:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:47:31.427 19:40:47 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:47:31.427 19:40:47 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:47:31.427 19:40:47 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:47:31.427 19:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:47:31.427 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:47:31.427 19:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:47:31.427 19:40:47 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:47:31.427 19:40:47 -- bdev/blockdev.sh@749 -- # jq -r .name 00:47:31.427 19:40:47 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7f9cf65e-06fe-4c70-96fa-2649376d8334"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' 
"uuid": "7f9cf65e-06fe-4c70-96fa-2649376d8334",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:47:31.427 19:40:47 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:47:31.685 19:40:47 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:47:31.685 19:40:47 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:47:31.685 19:40:47 -- bdev/blockdev.sh@754 -- # killprocess 148961 00:47:31.685 19:40:47 -- common/autotest_common.sh@936 -- # '[' -z 148961 ']' 00:47:31.685 19:40:47 -- common/autotest_common.sh@940 -- # kill -0 148961 00:47:31.685 19:40:47 -- common/autotest_common.sh@941 -- # uname 00:47:31.685 19:40:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:47:31.685 19:40:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148961 00:47:31.685 19:40:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:47:31.685 19:40:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:47:31.685 19:40:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148961' 00:47:31.685 killing process with pid 148961 00:47:31.685 19:40:47 -- common/autotest_common.sh@955 -- # kill 148961 00:47:31.685 19:40:47 -- common/autotest_common.sh@960 -- # wait 148961 00:47:34.212 19:40:50 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:34.212 19:40:50 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:47:34.212 19:40:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:47:34.212 19:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:34.212 19:40:50 -- common/autotest_common.sh@10 -- # set +x 00:47:34.212 ************************************ 00:47:34.212 START TEST bdev_hello_world 00:47:34.212 ************************************ 00:47:34.212 19:40:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:47:34.470 [2024-04-18 19:40:50.176281] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:34.470 [2024-04-18 19:40:50.176660] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149060 ] 00:47:34.470 [2024-04-18 19:40:50.360275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:34.728 [2024-04-18 19:40:50.644703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:35.294 [2024-04-18 19:40:51.155897] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:47:35.294 [2024-04-18 19:40:51.156131] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:47:35.294 [2024-04-18 19:40:51.156210] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:47:35.294 [2024-04-18 19:40:51.159614] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:47:35.294 [2024-04-18 19:40:51.160148] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:47:35.294 [2024-04-18 19:40:51.160316] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:47:35.294 [2024-04-18 19:40:51.160551] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:47:35.294 00:47:35.294 [2024-04-18 19:40:51.160696] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:47:37.210 ************************************ 00:47:37.210 END TEST bdev_hello_world 00:47:37.210 ************************************ 00:47:37.210 00:47:37.210 real 0m2.523s 00:47:37.210 user 0m2.191s 00:47:37.210 sys 0m0.229s 00:47:37.210 19:40:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:37.210 19:40:52 -- common/autotest_common.sh@10 -- # set +x 00:47:37.210 19:40:52 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:47:37.210 19:40:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:47:37.211 19:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:37.211 19:40:52 -- common/autotest_common.sh@10 -- # set +x 00:47:37.211 ************************************ 00:47:37.211 START TEST bdev_bounds 00:47:37.211 ************************************ 00:47:37.211 19:40:52 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:47:37.211 Process bdevio pid: 149121 00:47:37.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:37.211 19:40:52 -- bdev/blockdev.sh@290 -- # bdevio_pid=149121 00:47:37.211 19:40:52 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:37.211 19:40:52 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:47:37.211 19:40:52 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 149121' 00:47:37.211 19:40:52 -- bdev/blockdev.sh@293 -- # waitforlisten 149121 00:47:37.211 19:40:52 -- common/autotest_common.sh@817 -- # '[' -z 149121 ']' 00:47:37.211 19:40:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:37.211 19:40:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:47:37.211 19:40:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:37.211 19:40:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:47:37.211 19:40:52 -- common/autotest_common.sh@10 -- # set +x 00:47:37.211 [2024-04-18 19:40:52.785345] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:47:37.211 [2024-04-18 19:40:52.785687] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149121 ] 00:47:37.211 [2024-04-18 19:40:52.961109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:37.467 [2024-04-18 19:40:53.196266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:37.467 [2024-04-18 19:40:53.196425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:37.467 [2024-04-18 19:40:53.196433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:38.032 19:40:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:47:38.032 19:40:53 -- common/autotest_common.sh@850 -- # return 0 00:47:38.032 19:40:53 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:47:38.032 I/O targets: 00:47:38.032 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:47:38.032 00:47:38.032 00:47:38.032 CUnit - A unit testing framework for C - Version 2.1-3 00:47:38.032 http://cunit.sourceforge.net/ 00:47:38.032 00:47:38.032 00:47:38.032 Suite: bdevio tests on: Nvme0n1 00:47:38.032 Test: blockdev write read block ...passed 00:47:38.032 Test: blockdev write zeroes read block ...passed 00:47:38.032 Test: blockdev write zeroes read no split ...passed 00:47:38.032 Test: blockdev write zeroes read split ...passed 00:47:38.032 Test: blockdev write zeroes read split partial ...passed 00:47:38.032 Test: blockdev reset ...[2024-04-18 19:40:53.936857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:47:38.032 [2024-04-18 19:40:53.942847] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:47:38.032 passed 00:47:38.032 Test: blockdev write read 8 blocks ...passed 00:47:38.032 Test: blockdev write read size > 128k ...passed 00:47:38.032 Test: blockdev write read invalid size ...passed 00:47:38.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:47:38.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:38.032 Test: blockdev write read max offset ...passed 00:47:38.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:47:38.032 Test: blockdev writev readv 8 blocks ...passed 00:47:38.032 Test: blockdev writev readv 30 x 1block ...passed 00:47:38.032 Test: blockdev writev readv block ...passed 00:47:38.032 Test: blockdev writev readv size > 128k ...passed 00:47:38.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:38.032 Test: blockdev comparev and writev ...[2024-04-18 19:40:53.952371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3220d000 len:0x1000 00:47:38.032 [2024-04-18 19:40:53.952586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:47:38.032 passed 00:47:38.032 Test: blockdev nvme passthru rw ...passed 00:47:38.032 Test: blockdev nvme passthru vendor specific ...[2024-04-18 19:40:53.953508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:47:38.032 [2024-04-18 19:40:53.953654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:47:38.032 passed 00:47:38.290 Test: blockdev nvme admin passthru ...passed 00:47:38.290 Test: blockdev copy ...passed 00:47:38.290 00:47:38.290 Run Summary: Type Total Ran Passed Failed Inactive 00:47:38.290 suites 1 1 n/a 0 0 00:47:38.290 tests 23 23 23 0 0 00:47:38.290 asserts 152 152 152 0 n/a 00:47:38.290 00:47:38.290 Elapsed time = 0.271 seconds 00:47:38.290 0 00:47:38.290 19:40:53 -- bdev/blockdev.sh@295 -- # killprocess 149121 00:47:38.290 19:40:53 -- common/autotest_common.sh@936 -- # '[' -z 149121 ']' 00:47:38.290 19:40:53 -- common/autotest_common.sh@940 -- # kill -0 149121 00:47:38.290 19:40:53 -- common/autotest_common.sh@941 -- # uname 00:47:38.290 19:40:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:47:38.290 19:40:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149121 00:47:38.290 killing process with pid 149121 00:47:38.290 19:40:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:47:38.290 19:40:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:47:38.290 19:40:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149121' 00:47:38.290 19:40:53 -- common/autotest_common.sh@955 -- # kill 149121 00:47:38.290 19:40:53 -- common/autotest_common.sh@960 -- # wait 149121 00:47:40.187 ************************************ 00:47:40.187 END TEST bdev_bounds 00:47:40.187 ************************************ 00:47:40.187 19:40:55 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:47:40.187 00:47:40.187 real 0m2.967s 00:47:40.187 user 0m6.958s 00:47:40.187 sys 0m0.371s 00:47:40.187 19:40:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:40.187 19:40:55 -- common/autotest_common.sh@10 -- # set +x 00:47:40.187 19:40:55 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:47:40.187 19:40:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:47:40.187 19:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:40.187 19:40:55 -- common/autotest_common.sh@10 -- # set +x 00:47:40.187 ************************************ 00:47:40.187 START TEST bdev_nbd 00:47:40.187 ************************************ 00:47:40.187 19:40:55 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:47:40.187 19:40:55 -- bdev/blockdev.sh@300 -- # uname -s 00:47:40.187 19:40:55 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:47:40.187 19:40:55 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:40.187 19:40:55 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:40.187 19:40:55 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:47:40.187 19:40:55 -- bdev/blockdev.sh@304 -- # local bdev_all 00:47:40.187 19:40:55 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:47:40.187 19:40:55 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:47:40.187 19:40:55 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:47:40.187 19:40:55 -- bdev/blockdev.sh@311 -- # local nbd_all 00:47:40.187 19:40:55 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:47:40.187 19:40:55 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:47:40.187 19:40:55 -- bdev/blockdev.sh@314 -- # local nbd_list 00:47:40.187 19:40:55 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:47:40.187 19:40:55 -- bdev/blockdev.sh@315 -- # local bdev_list 00:47:40.187 19:40:55 -- bdev/blockdev.sh@318 -- # nbd_pid=149217 00:47:40.187 19:40:55 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:47:40.187 19:40:55 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:40.187 19:40:55 -- bdev/blockdev.sh@320 -- # waitforlisten 149217 /var/tmp/spdk-nbd.sock 00:47:40.187 19:40:55 -- common/autotest_common.sh@817 -- # '[' -z 149217 ']' 00:47:40.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:40.187 19:40:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:40.187 19:40:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:47:40.187 19:40:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:47:40.187 19:40:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:47:40.187 19:40:55 -- common/autotest_common.sh@10 -- # set +x 00:47:40.187 [2024-04-18 19:40:55.843088] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:40.187 [2024-04-18 19:40:55.843445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:40.187 [2024-04-18 19:40:56.012792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:40.445 [2024-04-18 19:40:56.309744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:41.010 19:40:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:47:41.010 19:40:56 -- common/autotest_common.sh@850 -- # return 0 00:47:41.010 19:40:56 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@24 -- # local i 00:47:41.010 19:40:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:47:41.011 19:40:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:47:41.011 19:40:56 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:41.011 19:40:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:47:41.269 19:40:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:47:41.269 19:40:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:47:41.269 19:40:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:47:41.269 19:40:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:47:41.269 19:40:57 -- common/autotest_common.sh@855 -- # local i 00:47:41.269 19:40:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:47:41.269 19:40:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:47:41.269 19:40:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:47:41.269 19:40:57 -- common/autotest_common.sh@859 -- # break 00:47:41.269 19:40:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:47:41.269 19:40:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:47:41.269 19:40:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:41.269 1+0 records in 00:47:41.269 1+0 records out 00:47:41.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035431 s, 11.6 MB/s 00:47:41.269 19:40:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:41.269 19:40:57 -- common/autotest_common.sh@872 -- # size=4096 00:47:41.269 19:40:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:41.269 19:40:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:47:41.269 19:40:57 -- common/autotest_common.sh@875 -- # return 0 00:47:41.269 19:40:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:47:41.269 19:40:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:41.269 19:40:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:41.527 19:40:57 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:47:41.527 { 00:47:41.527 "nbd_device": "/dev/nbd0", 00:47:41.527 "bdev_name": "Nvme0n1" 00:47:41.527 } 00:47:41.527 ]' 00:47:41.527 19:40:57 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:47:41.527 19:40:57 -- bdev/nbd_common.sh@119 -- # echo '[ 00:47:41.527 { 00:47:41.527 "nbd_device": "/dev/nbd0", 00:47:41.527 "bdev_name": "Nvme0n1" 00:47:41.527 } 00:47:41.527 ]' 00:47:41.527 19:40:57 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@51 -- # local i 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@41 -- # break 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@45 -- # return 0 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:41.785 19:40:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:42.043 19:40:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:42.043 19:40:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:42.043 19:40:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@65 -- # true 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@65 -- # count=0 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@122 -- # count=0 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@127 -- # return 0 00:47:42.301 19:40:57 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
00:47:42.301 19:40:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@12 -- # local i 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:42.301 19:40:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:47:42.301 /dev/nbd0 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:42.301 19:40:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:47:42.301 19:40:58 -- common/autotest_common.sh@855 -- # local i 00:47:42.301 19:40:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:47:42.301 19:40:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:47:42.301 19:40:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:47:42.301 19:40:58 -- common/autotest_common.sh@859 -- # break 00:47:42.301 19:40:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:47:42.301 19:40:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:47:42.301 19:40:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:42.301 1+0 records in 00:47:42.301 1+0 records out 00:47:42.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420772 s, 9.7 MB/s 00:47:42.301 19:40:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:42.301 19:40:58 -- common/autotest_common.sh@872 -- # size=4096 00:47:42.301 19:40:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:42.301 19:40:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:47:42.301 19:40:58 -- common/autotest_common.sh@875 -- # return 0 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:42.301 19:40:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:42.560 19:40:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:42.560 { 00:47:42.560 "nbd_device": "/dev/nbd0", 00:47:42.560 "bdev_name": "Nvme0n1" 00:47:42.560 } 00:47:42.560 ]' 00:47:42.560 19:40:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:42.560 { 00:47:42.560 "nbd_device": "/dev/nbd0", 00:47:42.560 "bdev_name": "Nvme0n1" 00:47:42.560 } 00:47:42.560 ]' 00:47:42.560 19:40:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@65 -- # count=1 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@66 -- # echo 1 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@95 -- # count=1 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 
00:47:42.819 19:40:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:47:42.819 256+0 records in 00:47:42.819 256+0 records out 00:47:42.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00707644 s, 148 MB/s 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:42.819 256+0 records in 00:47:42.819 256+0 records out 00:47:42.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0468351 s, 22.4 MB/s 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@51 -- # local i 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:42.819 19:40:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@41 -- # break 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@45 -- # return 0 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:43.078 19:40:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:47:43.346 19:40:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@65 -- # true 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@65 -- # count=0 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@104 -- # count=0 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@109 -- # return 0 00:47:43.346 19:40:59 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:47:43.346 19:40:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:47:43.605 malloc_lvol_verify 00:47:43.605 19:40:59 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:47:43.874 ec4aba77-5012-4e27-8aa6-d60cd8dcb21e 00:47:43.874 19:40:59 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:47:44.132 15b1ca0b-08cb-4744-b926-281403c690e4 00:47:44.132 19:40:59 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:47:44.389 /dev/nbd0 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:47:44.389 mke2fs 1.45.5 (07-Jan-2020) 00:47:44.389 00:47:44.389 Filesystem too small for a journal 00:47:44.389 Creating filesystem with 1024 4k blocks and 1024 inodes 00:47:44.389 00:47:44.389 Allocating group tables: 0/1 done 00:47:44.389 Writing inode tables: 0/1 done 00:47:44.389 Writing superblocks and filesystem accounting information: 0/1 done 00:47:44.389 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@51 -- # local i 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:44.389 19:41:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:44.646 19:41:00 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@41 -- # break 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@45 -- # return 0 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:47:44.646 19:41:00 -- bdev/nbd_common.sh@147 -- # return 0 00:47:44.646 19:41:00 -- bdev/blockdev.sh@326 -- # killprocess 149217 00:47:44.646 19:41:00 -- common/autotest_common.sh@936 -- # '[' -z 149217 ']' 00:47:44.646 19:41:00 -- common/autotest_common.sh@940 -- # kill -0 149217 00:47:44.646 19:41:00 -- common/autotest_common.sh@941 -- # uname 00:47:44.646 19:41:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:47:44.646 19:41:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149217 00:47:44.646 19:41:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:47:44.646 killing process with pid 149217 00:47:44.646 19:41:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:47:44.646 19:41:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149217' 00:47:44.646 19:41:00 -- common/autotest_common.sh@955 -- # kill 149217 00:47:44.646 19:41:00 -- common/autotest_common.sh@960 -- # wait 149217 00:47:46.546 ************************************ 00:47:46.546 END TEST bdev_nbd 00:47:46.546 ************************************ 00:47:46.546 19:41:01 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:47:46.546 00:47:46.546 real 0m6.194s 00:47:46.546 user 0m8.625s 00:47:46.546 sys 0m1.351s 00:47:46.546 19:41:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:46.546 19:41:01 -- common/autotest_common.sh@10 -- # set +x 00:47:46.546 19:41:02 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:47:46.546 19:41:02 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:47:46.546 skipping fio tests on NVMe due to multi-ns failures. 00:47:46.546 19:41:02 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:47:46.546 19:41:02 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:46.546 19:41:02 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:47:46.546 19:41:02 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:47:46.546 19:41:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:46.546 19:41:02 -- common/autotest_common.sh@10 -- # set +x 00:47:46.546 ************************************ 00:47:46.546 START TEST bdev_verify 00:47:46.546 ************************************ 00:47:46.546 19:41:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:47:46.546 [2024-04-18 19:41:02.124829] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:47:46.546 [2024-04-18 19:41:02.125133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149426 ] 00:47:46.546 [2024-04-18 19:41:02.307563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:46.809 [2024-04-18 19:41:02.603520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:46.809 [2024-04-18 19:41:02.603530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:47.404 Running I/O for 5 seconds... 00:47:52.667 00:47:52.667 Latency(us) 00:47:52.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:52.667 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:47:52.667 Verification LBA range: start 0x0 length 0xa0000 00:47:52.667 Nvme0n1 : 5.01 9623.03 37.59 0.00 0.00 13229.87 772.39 24092.28 00:47:52.667 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:47:52.667 Verification LBA range: start 0xa0000 length 0xa0000 00:47:52.667 Nvme0n1 : 5.01 9472.70 37.00 0.00 0.00 13436.98 745.08 22094.99 00:47:52.667 =================================================================================================================== 00:47:52.667 Total : 19095.72 74.59 0.00 0.00 13332.64 745.08 24092.28 00:47:54.120 00:47:54.120 real 0m7.878s 00:47:54.120 user 0m14.317s 00:47:54.120 sys 0m0.269s 00:47:54.120 19:41:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:54.120 ************************************ 00:47:54.120 END TEST bdev_verify 00:47:54.120 ************************************ 00:47:54.120 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:47:54.120 19:41:09 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:47:54.120 19:41:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:47:54.120 19:41:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:54.120 19:41:09 -- common/autotest_common.sh@10 -- # set +x 00:47:54.121 ************************************ 00:47:54.121 START TEST bdev_verify_big_io 00:47:54.121 ************************************ 00:47:54.121 19:41:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:47:54.385 [2024-04-18 19:41:10.066744] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:47:54.385 [2024-04-18 19:41:10.067111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149553 ] 00:47:54.385 [2024-04-18 19:41:10.235137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:54.643 [2024-04-18 19:41:10.497872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:54.643 [2024-04-18 19:41:10.497886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:55.210 Running I/O for 5 seconds... 
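The bdev_verify stage above is a timed bdevperf run against the bdev described in bdev.json; the latency table it prints is the tool's standard summary. A condensed sketch of the invocation, with the flags it uses spelled out, is shown here; the -C flag is copied verbatim from the log rather than documented, and the config path is the one the harness generated earlier.

    # Rough reconstruction of the bdev_verify invocation seen above.
    #   -q 128     I/O queue depth
    #   -o 4096    I/O size in bytes
    #   -w verify  write-then-read-back verification workload
    #   -t 5       run time in seconds
    #   -m 0x3     core mask (two reactors, matching the "Reactor started on core 0/1" lines)
    #   -C         kept as in the log invocation
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The bdev_verify_big_io and bdev_write_zeroes runs that follow use the same pattern with different -o/-w/-t values (and a single core for write_zeroes).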
00:48:00.474 00:48:00.474 Latency(us) 00:48:00.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:00.474 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:48:00.474 Verification LBA range: start 0x0 length 0xa000 00:48:00.474 Nvme0n1 : 5.07 699.03 43.69 0.00 0.00 178441.61 581.24 283614.84 00:48:00.474 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:48:00.474 Verification LBA range: start 0xa000 length 0xa000 00:48:00.474 Nvme0n1 : 5.06 776.69 48.54 0.00 0.00 161311.81 920.62 291603.99 00:48:00.474 =================================================================================================================== 00:48:00.474 Total : 1475.72 92.23 0.00 0.00 169437.75 581.24 291603.99 00:48:02.377 00:48:02.377 real 0m8.027s 00:48:02.377 user 0m14.678s 00:48:02.377 sys 0m0.268s 00:48:02.377 19:41:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:02.377 19:41:18 -- common/autotest_common.sh@10 -- # set +x 00:48:02.377 ************************************ 00:48:02.377 END TEST bdev_verify_big_io 00:48:02.377 ************************************ 00:48:02.377 19:41:18 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:02.377 19:41:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:48:02.377 19:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:02.377 19:41:18 -- common/autotest_common.sh@10 -- # set +x 00:48:02.377 ************************************ 00:48:02.377 START TEST bdev_write_zeroes 00:48:02.377 ************************************ 00:48:02.377 19:41:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:02.377 [2024-04-18 19:41:18.220932] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:02.377 [2024-04-18 19:41:18.221175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149692 ] 00:48:02.646 [2024-04-18 19:41:18.415052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:02.922 [2024-04-18 19:41:18.666484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:03.487 Running I/O for 1 seconds... 
00:48:04.421 00:48:04.421 Latency(us) 00:48:04.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:04.421 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:48:04.421 Nvme0n1 : 1.00 52836.28 206.39 0.00 0.00 2416.66 936.23 10236.10 00:48:04.421 =================================================================================================================== 00:48:04.421 Total : 52836.28 206.39 0.00 0.00 2416.66 936.23 10236.10 00:48:06.323 00:48:06.323 real 0m3.632s 00:48:06.323 user 0m3.275s 00:48:06.323 sys 0m0.257s 00:48:06.323 19:41:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:06.323 19:41:21 -- common/autotest_common.sh@10 -- # set +x 00:48:06.323 ************************************ 00:48:06.323 END TEST bdev_write_zeroes 00:48:06.323 ************************************ 00:48:06.323 19:41:21 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:06.323 19:41:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:48:06.323 19:41:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:06.323 19:41:21 -- common/autotest_common.sh@10 -- # set +x 00:48:06.323 ************************************ 00:48:06.323 START TEST bdev_json_nonenclosed 00:48:06.323 ************************************ 00:48:06.323 19:41:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:06.323 [2024-04-18 19:41:21.918064] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:06.323 [2024-04-18 19:41:21.918409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149752 ] 00:48:06.323 [2024-04-18 19:41:22.079122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:06.582 [2024-04-18 19:41:22.303594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:06.582 [2024-04-18 19:41:22.303708] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:48:06.582 [2024-04-18 19:41:22.303742] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:06.582 [2024-04-18 19:41:22.303783] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:07.150 00:48:07.150 real 0m0.944s 00:48:07.150 user 0m0.751s 00:48:07.150 sys 0m0.093s 00:48:07.150 19:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:07.150 19:41:22 -- common/autotest_common.sh@10 -- # set +x 00:48:07.150 ************************************ 00:48:07.150 END TEST bdev_json_nonenclosed 00:48:07.150 ************************************ 00:48:07.150 19:41:22 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:07.150 19:41:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:48:07.150 19:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:07.150 19:41:22 -- common/autotest_common.sh@10 -- # set +x 00:48:07.150 ************************************ 00:48:07.150 START TEST bdev_json_nonarray 00:48:07.150 ************************************ 00:48:07.150 19:41:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:07.150 [2024-04-18 19:41:22.964512] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:07.150 [2024-04-18 19:41:22.964712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149794 ] 00:48:07.406 [2024-04-18 19:41:23.148813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:07.664 [2024-04-18 19:41:23.429786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:07.664 [2024-04-18 19:41:23.429908] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
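Both bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at a deliberately malformed --json config and has to fail cleanly with the json_config *ERROR* lines shown above, shutting down through spdk_app_stop with a non-zero code instead of crashing. The contents of nonenclosed.json and nonarray.json are not reproduced in the log; the snippets below are only illustrative guesses consistent with the two error messages, not the real test files.

    # Hypothetical shapes of the malformed configs (illustrative only).

    # "Invalid JSON configuration: not enclosed in {}." -- top level is not an object:
    cat > /tmp/nonenclosed.json <<'EOF'
    [
      { "subsystems": [] }
    ]
    EOF

    # "Invalid JSON configuration: 'subsystems' should be an array." -- wrong type:
    cat > /tmp/nonarray.json <<'EOF'
    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }
    EOF

    # Feeding either file to bdevperf via --json should reproduce the matching
    # error and the "spdk_app_stop'd on non-zero" warning seen in the log.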
00:48:07.664 [2024-04-18 19:41:23.429944] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:07.664 [2024-04-18 19:41:23.429968] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:08.231 00:48:08.231 real 0m1.028s 00:48:08.231 user 0m0.771s 00:48:08.231 sys 0m0.157s 00:48:08.231 19:41:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:08.231 ************************************ 00:48:08.231 END TEST bdev_json_nonarray 00:48:08.231 ************************************ 00:48:08.231 19:41:23 -- common/autotest_common.sh@10 -- # set +x 00:48:08.231 19:41:23 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:48:08.231 19:41:23 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:48:08.231 19:41:23 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:48:08.231 19:41:23 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:48:08.231 19:41:23 -- bdev/blockdev.sh@811 -- # cleanup 00:48:08.231 19:41:23 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:48:08.231 19:41:23 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:08.231 19:41:23 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:48:08.231 19:41:23 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:48:08.231 19:41:23 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:48:08.231 19:41:23 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:48:08.231 ************************************ 00:48:08.231 END TEST blockdev_nvme 00:48:08.231 ************************************ 00:48:08.231 00:48:08.231 real 0m38.622s 00:48:08.231 user 0m56.695s 00:48:08.231 sys 0m3.908s 00:48:08.231 19:41:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:08.231 19:41:23 -- common/autotest_common.sh@10 -- # set +x 00:48:08.231 19:41:24 -- spdk/autotest.sh@209 -- # uname -s 00:48:08.231 19:41:24 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:48:08.231 19:41:24 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:48:08.231 19:41:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:48:08.231 19:41:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:08.231 19:41:24 -- common/autotest_common.sh@10 -- # set +x 00:48:08.231 ************************************ 00:48:08.231 START TEST blockdev_nvme_gpt 00:48:08.231 ************************************ 00:48:08.231 19:41:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:48:08.231 * Looking for test storage... 
00:48:08.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:48:08.231 19:41:24 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:48:08.231 19:41:24 -- bdev/nbd_common.sh@6 -- # set -e 00:48:08.231 19:41:24 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:48:08.231 19:41:24 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:08.231 19:41:24 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:48:08.231 19:41:24 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:48:08.231 19:41:24 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:48:08.231 19:41:24 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:48:08.231 19:41:24 -- bdev/blockdev.sh@20 -- # : 00:48:08.231 19:41:24 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:48:08.231 19:41:24 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:48:08.231 19:41:24 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:48:08.231 19:41:24 -- bdev/blockdev.sh@674 -- # uname -s 00:48:08.231 19:41:24 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:48:08.231 19:41:24 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:48:08.231 19:41:24 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:48:08.231 19:41:24 -- bdev/blockdev.sh@683 -- # crypto_device= 00:48:08.231 19:41:24 -- bdev/blockdev.sh@684 -- # dek= 00:48:08.231 19:41:24 -- bdev/blockdev.sh@685 -- # env_ctx= 00:48:08.231 19:41:24 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:48:08.231 19:41:24 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:48:08.231 19:41:24 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:48:08.231 19:41:24 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:48:08.231 19:41:24 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:48:08.231 19:41:24 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=149893 00:48:08.231 19:41:24 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:48:08.231 19:41:24 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:48:08.231 19:41:24 -- bdev/blockdev.sh@49 -- # waitforlisten 149893 00:48:08.231 19:41:24 -- common/autotest_common.sh@817 -- # '[' -z 149893 ']' 00:48:08.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:08.231 19:41:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:08.231 19:41:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:48:08.231 19:41:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:08.231 19:41:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:48:08.231 19:41:24 -- common/autotest_common.sh@10 -- # set +x 00:48:08.489 [2024-04-18 19:41:24.246701] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:48:08.489 [2024-04-18 19:41:24.246892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149893 ] 00:48:08.747 [2024-04-18 19:41:24.423794] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:09.004 [2024-04-18 19:41:24.721813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:09.938 19:41:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:48:09.938 19:41:25 -- common/autotest_common.sh@850 -- # return 0 00:48:09.938 19:41:25 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:48:09.938 19:41:25 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:48:09.938 19:41:25 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:48:10.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:10.196 Waiting for block devices as requested 00:48:10.455 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:10.455 19:41:26 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:48:10.455 19:41:26 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:48:10.455 19:41:26 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:48:10.455 19:41:26 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:48:10.455 19:41:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:48:10.455 19:41:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:48:10.455 19:41:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:48:10.455 19:41:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:48:10.455 19:41:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:48:10.455 19:41:26 -- bdev/blockdev.sh@107 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:48:10.455 19:41:26 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:48:10.455 19:41:26 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:48:10.455 19:41:26 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:48:10.455 19:41:26 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:48:10.455 19:41:26 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:48:10.455 19:41:26 -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:48:10.455 19:41:26 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:48:10.455 BYT; 00:48:10.455 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:48:10.455 19:41:26 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:48:10.455 BYT; 00:48:10.455 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:48:10.455 19:41:26 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:48:10.455 19:41:26 -- bdev/blockdev.sh@116 -- # break 00:48:10.455 19:41:26 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:48:10.455 19:41:26 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:48:10.455 19:41:26 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:48:10.455 19:41:26 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:48:10.713 19:41:26 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:48:10.713 19:41:26 -- 
scripts/common.sh@408 -- # local spdk_guid 00:48:10.713 19:41:26 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:48:10.713 19:41:26 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:48:10.713 19:41:26 -- scripts/common.sh@413 -- # IFS='()' 00:48:10.713 19:41:26 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:48:10.713 19:41:26 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:48:10.713 19:41:26 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:48:10.713 19:41:26 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:48:10.713 19:41:26 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:48:10.713 19:41:26 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:48:10.713 19:41:26 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:48:10.713 19:41:26 -- scripts/common.sh@420 -- # local spdk_guid 00:48:10.713 19:41:26 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:48:10.713 19:41:26 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:48:10.713 19:41:26 -- scripts/common.sh@425 -- # IFS='()' 00:48:10.713 19:41:26 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:48:10.713 19:41:26 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:48:10.713 19:41:26 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:48:10.713 19:41:26 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:48:10.713 19:41:26 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:48:10.713 19:41:26 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:48:10.713 19:41:26 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:48:12.085 The operation has completed successfully. 00:48:12.085 19:41:27 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:48:13.019 The operation has completed successfully. 
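With both sgdisk calls completed, the namespace now carries the layout the gpt bdev module expects: setup_gpt_conf first used parted to create two half-size partitions, then retyped them with SPDK's own partition type GUIDs (derived from module/bdev/gpt/gpt.h by get_spdk_gpt() and get_spdk_gpt_old() in scripts/common.sh, as traced above) and pinned their unique partition GUIDs. A condensed sketch of the same steps, using the literal GUID values printed in the trace:

    DISK=/dev/nvme0n1

    # Two partitions covering the whole namespace, as in the parted call above.
    parted -s "$DISK" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # Partition type GUIDs as resolved from gpt.h in the trace:
    #   SPDK_GPT_PART_TYPE_GUID      6527994e-2c5a-4eec-9613-8f5944074e8b
    #   SPDK_GPT_PART_TYPE_GUID_OLD  7c5222bd-8f5d-4087-9c00-bf9843c7b58c
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DISK"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DISK"

When spdk_tgt later attaches the controller, the gpt module recognises these type GUIDs and exposes the partitions as the Nvme0n1p1 and Nvme0n1p2 bdevs listed further down.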
00:48:13.019 19:41:28 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:13.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:13.534 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:48:14.467 19:41:30 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:48:14.467 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.467 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.467 [] 00:48:14.467 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:48:14.468 19:41:30 -- bdev/blockdev.sh@81 -- # local json 00:48:14.468 19:41:30 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:48:14.468 19:41:30 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:48:14.468 19:41:30 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:48:14.468 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.468 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.468 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:48:14.468 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.468 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.468 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@740 -- # cat 00:48:14.468 19:41:30 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:48:14.468 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.468 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.468 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:48:14.468 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.468 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.468 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:48:14.468 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.468 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.468 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:48:14.468 19:41:30 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:48:14.468 19:41:30 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:48:14.468 19:41:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:14.468 19:41:30 -- common/autotest_common.sh@10 -- # set +x 00:48:14.468 19:41:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:14.468 19:41:30 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:48:14.468 19:41:30 -- bdev/blockdev.sh@749 -- # jq -r .name 00:48:14.468 19:41:30 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:48:14.468 19:41:30 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:48:14.468 19:41:30 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:48:14.468 19:41:30 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:48:14.468 19:41:30 -- bdev/blockdev.sh@754 -- # killprocess 149893 00:48:14.468 19:41:30 -- common/autotest_common.sh@936 -- # '[' -z 149893 ']' 00:48:14.468 19:41:30 -- common/autotest_common.sh@940 -- # kill -0 149893 00:48:14.468 19:41:30 -- common/autotest_common.sh@941 -- # uname 00:48:14.468 19:41:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:48:14.468 19:41:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149893 00:48:14.468 19:41:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:48:14.468 19:41:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:48:14.468 19:41:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149893' 00:48:14.468 killing process with pid 149893 00:48:14.468 19:41:30 -- common/autotest_common.sh@955 -- # kill 149893 00:48:14.468 19:41:30 -- common/autotest_common.sh@960 -- # wait 149893 00:48:17.751 19:41:33 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:17.751 19:41:33 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:48:17.751 19:41:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:48:17.751 19:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:17.751 19:41:33 -- common/autotest_common.sh@10 -- # set +x 00:48:17.751 ************************************ 00:48:17.751 START TEST bdev_hello_world 00:48:17.751 ************************************ 00:48:17.751 19:41:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:48:17.751 [2024-04-18 19:41:33.202608] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:17.751 [2024-04-18 19:41:33.202818] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150401 ] 00:48:17.751 [2024-04-18 19:41:33.381782] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:17.751 [2024-04-18 19:41:33.615169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:18.318 [2024-04-18 19:41:34.131127] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:48:18.318 [2024-04-18 19:41:34.131205] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:48:18.318 [2024-04-18 19:41:34.131237] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:48:18.318 [2024-04-18 19:41:34.134603] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:48:18.318 [2024-04-18 19:41:34.135043] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:48:18.318 [2024-04-18 19:41:34.135089] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:48:18.318 [2024-04-18 19:41:34.135483] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:48:18.318 00:48:18.318 [2024-04-18 19:41:34.135675] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:48:20.220 00:48:20.220 real 0m2.518s 00:48:20.220 user 0m2.169s 00:48:20.220 sys 0m0.249s 00:48:20.220 19:41:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:20.220 19:41:35 -- common/autotest_common.sh@10 -- # set +x 00:48:20.220 ************************************ 00:48:20.220 END TEST bdev_hello_world 00:48:20.220 ************************************ 00:48:20.220 19:41:35 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:48:20.220 19:41:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:48:20.220 19:41:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:20.220 19:41:35 -- common/autotest_common.sh@10 -- # set +x 00:48:20.220 ************************************ 00:48:20.220 START TEST bdev_bounds 00:48:20.220 ************************************ 00:48:20.220 19:41:35 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:48:20.220 19:41:35 -- bdev/blockdev.sh@290 -- # bdevio_pid=150474 00:48:20.220 19:41:35 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:48:20.220 Process bdevio pid: 150474 00:48:20.220 19:41:35 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 150474' 00:48:20.220 19:41:35 -- bdev/blockdev.sh@293 -- # waitforlisten 150474 00:48:20.220 19:41:35 -- common/autotest_common.sh@817 -- # '[' -z 150474 ']' 00:48:20.220 19:41:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:20.220 19:41:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:48:20.220 19:41:35 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:48:20.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:20.220 19:41:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
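The hello_bdev run above is the smallest consumer of the bdev layer: the example app opens Nvme0n1p1, writes a buffer, reads it back and prints it, as its NOTICE lines show. Reproducing that run by hand needs only the example binary and the same JSON config (a sketch with the paths taken from the log):

    HELLO=/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -b selects the bdev to open; Nvme0n1p1 is the first GPT partition created earlier.
    "$HELLO" --json "$CONF" -b Nvme0n1p1
    # The run should end with: Read string from bdev : Hello World!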
00:48:20.220 19:41:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:48:20.220 19:41:35 -- common/autotest_common.sh@10 -- # set +x 00:48:20.220 [2024-04-18 19:41:35.825876] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:20.220 [2024-04-18 19:41:35.826316] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150474 ] 00:48:20.220 [2024-04-18 19:41:36.016549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:48:20.479 [2024-04-18 19:41:36.256538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:20.479 [2024-04-18 19:41:36.256611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:48:20.479 [2024-04-18 19:41:36.256616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:21.045 19:41:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:48:21.045 19:41:36 -- common/autotest_common.sh@850 -- # return 0 00:48:21.045 19:41:36 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:48:21.045 I/O targets: 00:48:21.045 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:48:21.045 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:48:21.045 00:48:21.045 00:48:21.045 CUnit - A unit testing framework for C - Version 2.1-3 00:48:21.045 http://cunit.sourceforge.net/ 00:48:21.045 00:48:21.045 00:48:21.045 Suite: bdevio tests on: Nvme0n1p2 00:48:21.045 Test: blockdev write read block ...passed 00:48:21.046 Test: blockdev write zeroes read block ...passed 00:48:21.046 Test: blockdev write zeroes read no split ...passed 00:48:21.046 Test: blockdev write zeroes read split ...passed 00:48:21.304 Test: blockdev write zeroes read split partial ...passed 00:48:21.304 Test: blockdev reset ...[2024-04-18 19:41:36.998669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:48:21.304 [2024-04-18 19:41:37.003278] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:48:21.304 passed 00:48:21.304 Test: blockdev write read 8 blocks ...passed 00:48:21.304 Test: blockdev write read size > 128k ...passed 00:48:21.304 Test: blockdev write read invalid size ...passed 00:48:21.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:48:21.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:48:21.304 Test: blockdev write read max offset ...passed 00:48:21.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:48:21.304 Test: blockdev writev readv 8 blocks ...passed 00:48:21.304 Test: blockdev writev readv 30 x 1block ...passed 00:48:21.304 Test: blockdev writev readv block ...passed 00:48:21.304 Test: blockdev writev readv size > 128k ...passed 00:48:21.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:48:21.304 Test: blockdev comparev and writev ...[2024-04-18 19:41:37.010915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x9f60b000 len:0x1000 00:48:21.304 [2024-04-18 19:41:37.011010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:48:21.304 passed 00:48:21.304 Test: blockdev nvme passthru rw ...passed 00:48:21.304 Test: blockdev nvme passthru vendor specific ...passed 00:48:21.304 Test: blockdev nvme admin passthru ...passed 00:48:21.304 Test: blockdev copy ...passed 00:48:21.304 Suite: bdevio tests on: Nvme0n1p1 00:48:21.304 Test: blockdev write read block ...passed 00:48:21.304 Test: blockdev write zeroes read block ...passed 00:48:21.304 Test: blockdev write zeroes read no split ...passed 00:48:21.304 Test: blockdev write zeroes read split ...passed 00:48:21.304 Test: blockdev write zeroes read split partial ...passed 00:48:21.304 Test: blockdev reset ...[2024-04-18 19:41:37.095204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:48:21.304 [2024-04-18 19:41:37.099224] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:48:21.304 passed 00:48:21.304 Test: blockdev write read 8 blocks ...passed 00:48:21.304 Test: blockdev write read size > 128k ...passed 00:48:21.304 Test: blockdev write read invalid size ...passed 00:48:21.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:48:21.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:48:21.304 Test: blockdev write read max offset ...passed 00:48:21.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:48:21.304 Test: blockdev writev readv 8 blocks ...passed 00:48:21.304 Test: blockdev writev readv 30 x 1block ...passed 00:48:21.304 Test: blockdev writev readv block ...passed 00:48:21.304 Test: blockdev writev readv size > 128k ...passed 00:48:21.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:48:21.304 Test: blockdev comparev and writev ...[2024-04-18 19:41:37.106318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x9f60d000 len:0x1000 00:48:21.304 [2024-04-18 19:41:37.106403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:48:21.304 passed 00:48:21.304 Test: blockdev nvme passthru rw ...passed 00:48:21.304 Test: blockdev nvme passthru vendor specific ...passed 00:48:21.304 Test: blockdev nvme admin passthru ...passed 00:48:21.304 Test: blockdev copy ...passed 00:48:21.304 00:48:21.304 Run Summary: Type Total Ran Passed Failed Inactive 00:48:21.305 suites 2 2 n/a 0 0 00:48:21.305 tests 46 46 46 0 0 00:48:21.305 asserts 284 284 284 0 n/a 00:48:21.305 00:48:21.305 Elapsed time = 0.533 seconds 00:48:21.305 0 00:48:21.305 19:41:37 -- bdev/blockdev.sh@295 -- # killprocess 150474 00:48:21.305 19:41:37 -- common/autotest_common.sh@936 -- # '[' -z 150474 ']' 00:48:21.305 19:41:37 -- common/autotest_common.sh@940 -- # kill -0 150474 00:48:21.305 19:41:37 -- common/autotest_common.sh@941 -- # uname 00:48:21.305 19:41:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:48:21.305 19:41:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150474 00:48:21.305 19:41:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:48:21.305 19:41:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:48:21.305 19:41:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150474' 00:48:21.305 killing process with pid 150474 00:48:21.305 19:41:37 -- common/autotest_common.sh@955 -- # kill 150474 00:48:21.305 19:41:37 -- common/autotest_common.sh@960 -- # wait 150474 00:48:23.205 19:41:38 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:48:23.205 00:48:23.205 real 0m2.958s 00:48:23.205 user 0m6.892s 00:48:23.205 sys 0m0.383s 00:48:23.205 19:41:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:23.205 ************************************ 00:48:23.205 19:41:38 -- common/autotest_common.sh@10 -- # set +x 00:48:23.205 END TEST bdev_bounds 00:48:23.205 ************************************ 00:48:23.205 19:41:38 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:48:23.205 19:41:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:48:23.205 19:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:23.205 19:41:38 -- common/autotest_common.sh@10 -- # set +x 00:48:23.205 ************************************ 00:48:23.205 START TEST bdev_nbd 
00:48:23.205 ************************************ 00:48:23.205 19:41:38 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:48:23.205 19:41:38 -- bdev/blockdev.sh@300 -- # uname -s 00:48:23.205 19:41:38 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:48:23.205 19:41:38 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:23.205 19:41:38 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:23.205 19:41:38 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:48:23.205 19:41:38 -- bdev/blockdev.sh@304 -- # local bdev_all 00:48:23.205 19:41:38 -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:48:23.205 19:41:38 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:48:23.205 19:41:38 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:48:23.205 19:41:38 -- bdev/blockdev.sh@311 -- # local nbd_all 00:48:23.205 19:41:38 -- bdev/blockdev.sh@312 -- # bdev_num=2 00:48:23.205 19:41:38 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:48:23.205 19:41:38 -- bdev/blockdev.sh@314 -- # local nbd_list 00:48:23.205 19:41:38 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:48:23.205 19:41:38 -- bdev/blockdev.sh@315 -- # local bdev_list 00:48:23.205 19:41:38 -- bdev/blockdev.sh@318 -- # nbd_pid=150553 00:48:23.206 19:41:38 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:48:23.206 19:41:38 -- bdev/blockdev.sh@320 -- # waitforlisten 150553 /var/tmp/spdk-nbd.sock 00:48:23.206 19:41:38 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:48:23.206 19:41:38 -- common/autotest_common.sh@817 -- # '[' -z 150553 ']' 00:48:23.206 19:41:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:48:23.206 19:41:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:48:23.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:48:23.206 19:41:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:48:23.206 19:41:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:48:23.206 19:41:38 -- common/autotest_common.sh@10 -- # set +x 00:48:23.206 [2024-04-18 19:41:38.879909] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:48:23.206 [2024-04-18 19:41:38.880166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:23.206 [2024-04-18 19:41:39.061432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:23.464 [2024-04-18 19:41:39.329147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:24.032 19:41:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:48:24.032 19:41:39 -- common/autotest_common.sh@850 -- # return 0 00:48:24.032 19:41:39 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@24 -- # local i 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:48:24.032 19:41:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:48:24.290 19:41:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:48:24.290 19:41:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:48:24.290 19:41:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:48:24.290 19:41:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:48:24.290 19:41:40 -- common/autotest_common.sh@855 -- # local i 00:48:24.290 19:41:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:48:24.290 19:41:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:48:24.290 19:41:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:48:24.290 19:41:40 -- common/autotest_common.sh@859 -- # break 00:48:24.290 19:41:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:48:24.290 19:41:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:48:24.290 19:41:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:48:24.290 1+0 records in 00:48:24.290 1+0 records out 00:48:24.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050457 s, 8.1 MB/s 00:48:24.290 19:41:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:24.290 19:41:40 -- common/autotest_common.sh@872 -- # size=4096 00:48:24.290 19:41:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:24.290 19:41:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:48:24.290 19:41:40 -- common/autotest_common.sh@875 -- # return 0 00:48:24.291 19:41:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:48:24.291 19:41:40 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:48:24.291 19:41:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 
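For the GPT case the nbd test exports both partitions at once: the trace maps Nvme0n1p1 to /dev/nbd0 and then issues the same RPC for Nvme0n1p2, which lands on /dev/nbd1 just below. Done by hand against an already-running spdk-nbd socket this is two RPC calls plus a readiness poll (a sketch reusing the paths from the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1p1 /dev/nbd0
    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1p2 /dev/nbd1

    # Wait until the kernel has registered both devices, as waitfornbd does in the trace.
    for n in nbd0 nbd1; do
        until grep -q -w "$n" /proc/partitions; do sleep 0.1; done
    done

    # nbd_get_disks now reports both mappings as a JSON array of
    # {"nbd_device": ..., "bdev_name": ...} objects, as shown further down.
    "$RPC" -s "$SOCK" nbd_get_disks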
00:48:24.549 19:41:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:48:24.549 19:41:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:48:24.549 19:41:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:48:24.549 19:41:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:48:24.549 19:41:40 -- common/autotest_common.sh@855 -- # local i 00:48:24.549 19:41:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:48:24.549 19:41:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:48:24.549 19:41:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:48:24.549 19:41:40 -- common/autotest_common.sh@859 -- # break 00:48:24.549 19:41:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:48:24.549 19:41:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:48:24.549 19:41:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:48:24.549 1+0 records in 00:48:24.549 1+0 records out 00:48:24.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582329 s, 7.0 MB/s 00:48:24.808 19:41:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:24.808 19:41:40 -- common/autotest_common.sh@872 -- # size=4096 00:48:24.808 19:41:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:24.808 19:41:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:48:24.808 19:41:40 -- common/autotest_common.sh@875 -- # return 0 00:48:24.808 19:41:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:48:24.808 19:41:40 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:48:24.808 19:41:40 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:48:25.067 { 00:48:25.067 "nbd_device": "/dev/nbd0", 00:48:25.067 "bdev_name": "Nvme0n1p1" 00:48:25.067 }, 00:48:25.067 { 00:48:25.067 "nbd_device": "/dev/nbd1", 00:48:25.067 "bdev_name": "Nvme0n1p2" 00:48:25.067 } 00:48:25.067 ]' 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@119 -- # echo '[ 00:48:25.067 { 00:48:25.067 "nbd_device": "/dev/nbd0", 00:48:25.067 "bdev_name": "Nvme0n1p1" 00:48:25.067 }, 00:48:25.067 { 00:48:25.067 "nbd_device": "/dev/nbd1", 00:48:25.067 "bdev_name": "Nvme0n1p2" 00:48:25.067 } 00:48:25.067 ]' 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@51 -- # local i 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:25.067 19:41:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:25.325 19:41:41 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@41 -- # break 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@45 -- # return 0 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:25.325 19:41:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@41 -- # break 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@45 -- # return 0 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:25.593 19:41:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@65 -- # true 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@65 -- # count=0 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@122 -- # count=0 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@127 -- # return 0 00:48:25.851 19:41:41 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@12 -- # local i 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:48:25.851 19:41:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:48:26.109 /dev/nbd0 00:48:26.109 19:41:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:48:26.109 19:41:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:48:26.109 19:41:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:48:26.109 19:41:41 -- common/autotest_common.sh@855 -- # local i 00:48:26.110 19:41:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:48:26.110 19:41:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:48:26.110 19:41:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:48:26.110 19:41:41 -- common/autotest_common.sh@859 -- # break 00:48:26.110 19:41:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:48:26.110 19:41:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:48:26.110 19:41:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:48:26.110 1+0 records in 00:48:26.110 1+0 records out 00:48:26.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004421 s, 9.3 MB/s 00:48:26.110 19:41:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:26.110 19:41:41 -- common/autotest_common.sh@872 -- # size=4096 00:48:26.110 19:41:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:26.110 19:41:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:48:26.110 19:41:41 -- common/autotest_common.sh@875 -- # return 0 00:48:26.110 19:41:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:48:26.110 19:41:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:48:26.110 19:41:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:48:26.368 /dev/nbd1 00:48:26.368 19:41:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:48:26.368 19:41:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:48:26.368 19:41:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:48:26.368 19:41:42 -- common/autotest_common.sh@855 -- # local i 00:48:26.368 19:41:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:48:26.368 19:41:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:48:26.368 19:41:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:48:26.368 19:41:42 -- common/autotest_common.sh@859 -- # break 00:48:26.368 19:41:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:48:26.368 19:41:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:48:26.368 19:41:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:48:26.368 1+0 records in 00:48:26.368 1+0 records out 00:48:26.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812128 s, 5.0 MB/s 00:48:26.368 19:41:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:26.368 19:41:42 -- common/autotest_common.sh@872 -- # size=4096 00:48:26.368 19:41:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:26.368 19:41:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:48:26.368 19:41:42 -- common/autotest_common.sh@875 -- # return 0 00:48:26.368 19:41:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:48:26.368 19:41:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:48:26.368 19:41:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:48:26.368 19:41:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:26.369 19:41:42 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:48:26.628 { 00:48:26.628 "nbd_device": "/dev/nbd0", 00:48:26.628 "bdev_name": "Nvme0n1p1" 00:48:26.628 }, 00:48:26.628 { 00:48:26.628 "nbd_device": "/dev/nbd1", 00:48:26.628 "bdev_name": "Nvme0n1p2" 00:48:26.628 } 00:48:26.628 ]' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:48:26.628 { 00:48:26.628 "nbd_device": "/dev/nbd0", 00:48:26.628 "bdev_name": "Nvme0n1p1" 00:48:26.628 }, 00:48:26.628 { 00:48:26.628 "nbd_device": "/dev/nbd1", 00:48:26.628 "bdev_name": "Nvme0n1p2" 00:48:26.628 } 00:48:26.628 ]' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:48:26.628 /dev/nbd1' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:48:26.628 /dev/nbd1' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@65 -- # count=2 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@95 -- # count=2 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:48:26.628 256+0 records in 00:48:26.628 256+0 records out 00:48:26.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454999 s, 230 MB/s 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:48:26.628 19:41:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:48:26.888 256+0 records in 00:48:26.888 256+0 records out 00:48:26.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0685486 s, 15.3 MB/s 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:48:26.888 256+0 records in 00:48:26.888 256+0 records out 00:48:26.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0742085 s, 14.1 MB/s 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd0 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@51 -- # local i 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:26.888 19:41:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@41 -- # break 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@45 -- # return 0 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:27.148 19:41:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@41 -- # break 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@45 -- # return 0 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:27.407 19:41:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@65 -- # true 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@65 -- # count=0 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@104 -- # count=0 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@109 -- # return 0 00:48:27.666 19:41:43 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@131 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:48:27.666 19:41:43 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:48:27.925 malloc_lvol_verify 00:48:28.183 19:41:43 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:48:28.442 a90edcda-7990-490f-8f69-05b5a327a01c 00:48:28.442 19:41:44 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:48:28.443 576eae40-9b25-4047-9a81-b48939a7d043 00:48:28.443 19:41:44 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:48:28.710 /dev/nbd0 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:48:28.710 mke2fs 1.45.5 (07-Jan-2020) 00:48:28.710 00:48:28.710 Filesystem too small for a journal 00:48:28.710 Creating filesystem with 1024 4k blocks and 1024 inodes 00:48:28.710 00:48:28.710 Allocating group tables: 0/1 done 00:48:28.710 Writing inode tables: 0/1 done 00:48:28.710 Writing superblocks and filesystem accounting information: 0/1 done 00:48:28.710 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@51 -- # local i 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:28.710 19:41:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@41 -- # break 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@45 -- # return 0 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:48:28.981 19:41:44 -- bdev/nbd_common.sh@147 -- # return 0 00:48:28.981 19:41:44 -- bdev/blockdev.sh@326 -- # killprocess 150553 00:48:28.981 19:41:44 -- common/autotest_common.sh@936 -- # '[' -z 150553 ']' 00:48:28.981 19:41:44 -- common/autotest_common.sh@940 -- # kill -0 150553 00:48:28.981 19:41:44 -- common/autotest_common.sh@941 -- # uname 00:48:28.981 19:41:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:48:29.240 19:41:44 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 150553 00:48:29.240 19:41:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:48:29.240 19:41:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:48:29.240 19:41:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150553' 00:48:29.240 killing process with pid 150553 00:48:29.240 19:41:44 -- common/autotest_common.sh@955 -- # kill 150553 00:48:29.240 19:41:44 -- common/autotest_common.sh@960 -- # wait 150553 00:48:30.615 ************************************ 00:48:30.615 END TEST bdev_nbd 00:48:30.615 ************************************ 00:48:30.615 19:41:46 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:48:30.615 00:48:30.615 real 0m7.499s 00:48:30.616 user 0m10.516s 00:48:30.616 sys 0m1.927s 00:48:30.616 19:41:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:30.616 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:48:30.616 19:41:46 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:48:30.616 19:41:46 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:48:30.616 skipping fio tests on NVMe due to multi-ns failures. 00:48:30.616 19:41:46 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:48:30.616 19:41:46 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:48:30.616 19:41:46 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:30.616 19:41:46 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:30.616 19:41:46 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:48:30.616 19:41:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:30.616 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:48:30.616 ************************************ 00:48:30.616 START TEST bdev_verify 00:48:30.616 ************************************ 00:48:30.616 19:41:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:30.616 [2024-04-18 19:41:46.459411] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:30.616 [2024-04-18 19:41:46.459557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150831 ] 00:48:30.873 [2024-04-18 19:41:46.624046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:31.131 [2024-04-18 19:41:46.842934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:31.131 [2024-04-18 19:41:46.842938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:31.697 Running I/O for 5 seconds... 
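The verify run above is a single bdevperf invocation against the JSON config written earlier. The flags below are copied from the command line recorded in the trace (paths are specific to this build environment); the option comments are a reading of the workload summary printed alongside the results, not a full description of each flag:

    # bdev_verify boils down to one bdevperf run; flags as recorded above.
    spdk_dir=/home/vagrant/spdk_repo/spdk
    bdevperf_args=(
        --json "$spdk_dir/test/bdev/bdev.json"   # bdev layout produced by the setup steps
        -q 128                                   # per-job queue depth (matches "depth: 128" in the results)
        -o 4096                                  # I/O size in bytes
        -w verify                                # verify workload: read back and compare
        -t 5                                     # run time, seconds
        -C                                       # flag taken verbatim from the trace
        -m 0x3                                   # core mask: reactors on cores 0 and 1
    )
    "$spdk_dir/build/examples/bdevperf" "${bdevperf_args[@]}"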
00:48:36.993 00:48:36.993 Latency(us) 00:48:36.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:36.993 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:48:36.993 Verification LBA range: start 0x0 length 0x4ff80 00:48:36.993 Nvme0n1p1 : 5.01 4747.93 18.55 0.00 0.00 26869.17 4743.56 34453.21 00:48:36.993 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:36.993 Verification LBA range: start 0x4ff80 length 0x4ff80 00:48:36.993 Nvme0n1p1 : 5.02 4670.51 18.24 0.00 0.00 27311.19 4743.56 34702.87 00:48:36.993 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:48:36.993 Verification LBA range: start 0x0 length 0x4ff7f 00:48:36.993 Nvme0n1p2 : 5.03 4762.53 18.60 0.00 0.00 26748.62 2715.06 33704.23 00:48:36.993 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:36.993 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:48:36.993 Nvme0n1p2 : 5.02 4676.21 18.27 0.00 0.00 27230.00 2761.87 34453.21 00:48:36.993 =================================================================================================================== 00:48:36.993 Total : 18857.18 73.66 0.00 0.00 27037.61 2715.06 34702.87 00:48:38.369 00:48:38.369 real 0m7.793s 00:48:38.369 user 0m14.292s 00:48:38.369 sys 0m0.242s 00:48:38.369 19:41:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:38.369 ************************************ 00:48:38.369 END TEST bdev_verify 00:48:38.369 ************************************ 00:48:38.369 19:41:54 -- common/autotest_common.sh@10 -- # set +x 00:48:38.369 19:41:54 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:38.369 19:41:54 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:48:38.369 19:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:38.369 19:41:54 -- common/autotest_common.sh@10 -- # set +x 00:48:38.369 ************************************ 00:48:38.369 START TEST bdev_verify_big_io 00:48:38.369 ************************************ 00:48:38.369 19:41:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:38.628 [2024-04-18 19:41:54.352163] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:38.628 [2024-04-18 19:41:54.352562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150943 ] 00:48:38.628 [2024-04-18 19:41:54.535252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:38.887 [2024-04-18 19:41:54.760964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:38.887 [2024-04-18 19:41:54.760964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:39.453 Running I/O for 5 seconds... 
00:48:44.750 00:48:44.750 Latency(us) 00:48:44.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:44.750 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:48:44.750 Verification LBA range: start 0x0 length 0x4ff8 00:48:44.750 Nvme0n1p1 : 5.15 425.32 26.58 0.00 0.00 294066.59 2902.31 335544.32 00:48:44.750 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:48:44.750 Verification LBA range: start 0x4ff8 length 0x4ff8 00:48:44.750 Nvme0n1p1 : 5.20 393.57 24.60 0.00 0.00 316136.08 5523.75 383479.22 00:48:44.750 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:48:44.750 Verification LBA range: start 0x0 length 0x4ff7 00:48:44.750 Nvme0n1p2 : 5.31 446.05 27.88 0.00 0.00 272579.08 834.80 305585.01 00:48:44.750 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:48:44.750 Verification LBA range: start 0x4ff7 length 0x4ff7 00:48:44.750 Nvme0n1p2 : 5.30 408.95 25.56 0.00 0.00 296635.70 983.04 385476.51 00:48:44.750 =================================================================================================================== 00:48:44.750 Total : 1673.89 104.62 0.00 0.00 294052.98 834.80 385476.51 00:48:47.279 00:48:47.279 real 0m8.316s 00:48:47.279 user 0m15.225s 00:48:47.279 sys 0m0.293s 00:48:47.279 19:42:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:47.279 ************************************ 00:48:47.279 END TEST bdev_verify_big_io 00:48:47.279 ************************************ 00:48:47.279 19:42:02 -- common/autotest_common.sh@10 -- # set +x 00:48:47.279 19:42:02 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:47.279 19:42:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:48:47.279 19:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:47.279 19:42:02 -- common/autotest_common.sh@10 -- # set +x 00:48:47.279 ************************************ 00:48:47.279 START TEST bdev_write_zeroes 00:48:47.279 ************************************ 00:48:47.279 19:42:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:47.279 [2024-04-18 19:42:02.749640] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:47.279 [2024-04-18 19:42:02.749849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151083 ] 00:48:47.279 [2024-04-18 19:42:02.912921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:47.279 [2024-04-18 19:42:03.149701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:47.844 Running I/O for 1 seconds... 
00:48:48.777 00:48:48.777 Latency(us) 00:48:48.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:48.777 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:48:48.777 Nvme0n1p1 : 1.01 25975.65 101.47 0.00 0.00 4917.91 2184.53 12295.80 00:48:48.777 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:48:48.777 Nvme0n1p2 : 1.01 25942.26 101.34 0.00 0.00 4916.15 2699.46 12420.63 00:48:48.777 =================================================================================================================== 00:48:48.777 Total : 51917.91 202.80 0.00 0.00 4917.03 2184.53 12420.63 00:48:50.678 00:48:50.678 real 0m3.538s 00:48:50.678 user 0m3.257s 00:48:50.678 sys 0m0.181s 00:48:50.678 19:42:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:50.678 19:42:06 -- common/autotest_common.sh@10 -- # set +x 00:48:50.678 ************************************ 00:48:50.678 END TEST bdev_write_zeroes 00:48:50.678 ************************************ 00:48:50.678 19:42:06 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:50.678 19:42:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:48:50.678 19:42:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:50.678 19:42:06 -- common/autotest_common.sh@10 -- # set +x 00:48:50.678 ************************************ 00:48:50.678 START TEST bdev_json_nonenclosed 00:48:50.678 ************************************ 00:48:50.678 19:42:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:50.678 [2024-04-18 19:42:06.360902] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:50.678 [2024-04-18 19:42:06.361051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151175 ] 00:48:50.678 [2024-04-18 19:42:06.520278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:50.936 [2024-04-18 19:42:06.739003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:50.936 [2024-04-18 19:42:06.739105] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:48:50.936 [2024-04-18 19:42:06.739141] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:50.936 [2024-04-18 19:42:06.739164] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:51.503 00:48:51.503 real 0m0.943s 00:48:51.503 user 0m0.719s 00:48:51.503 sys 0m0.125s 00:48:51.503 19:42:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:51.503 ************************************ 00:48:51.503 END TEST bdev_json_nonenclosed 00:48:51.503 ************************************ 00:48:51.503 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:48:51.503 19:42:07 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:51.503 19:42:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:48:51.503 19:42:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:51.503 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:48:51.503 ************************************ 00:48:51.503 START TEST bdev_json_nonarray 00:48:51.503 ************************************ 00:48:51.503 19:42:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:51.503 [2024-04-18 19:42:07.408930] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:51.503 [2024-04-18 19:42:07.409144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151218 ] 00:48:51.760 [2024-04-18 19:42:07.591101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:52.019 [2024-04-18 19:42:07.893018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:52.019 [2024-04-18 19:42:07.893150] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:48:52.019 [2024-04-18 19:42:07.893196] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:52.019 [2024-04-18 19:42:07.893228] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:52.586 00:48:52.586 real 0m1.079s 00:48:52.586 user 0m0.802s 00:48:52.586 sys 0m0.177s 00:48:52.586 19:42:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:52.586 19:42:08 -- common/autotest_common.sh@10 -- # set +x 00:48:52.586 ************************************ 00:48:52.586 END TEST bdev_json_nonarray 00:48:52.586 ************************************ 00:48:52.586 19:42:08 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:48:52.586 19:42:08 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:48:52.586 19:42:08 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:48:52.586 19:42:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:48:52.586 19:42:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:52.586 19:42:08 -- common/autotest_common.sh@10 -- # set +x 00:48:52.586 ************************************ 00:48:52.586 START TEST bdev_gpt_uuid 00:48:52.586 ************************************ 00:48:52.586 19:42:08 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:48:52.586 19:42:08 -- bdev/blockdev.sh@614 -- # local bdev 00:48:52.586 19:42:08 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:48:52.586 19:42:08 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=151254 00:48:52.586 19:42:08 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:48:52.586 19:42:08 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:48:52.586 19:42:08 -- bdev/blockdev.sh@49 -- # waitforlisten 151254 00:48:52.586 19:42:08 -- common/autotest_common.sh@817 -- # '[' -z 151254 ']' 00:48:52.586 19:42:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:52.586 19:42:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:48:52.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:52.586 19:42:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:52.586 19:42:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:48:52.586 19:42:08 -- common/autotest_common.sh@10 -- # set +x 00:48:52.844 [2024-04-18 19:42:08.591599] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:48:52.844 [2024-04-18 19:42:08.591845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151254 ] 00:48:53.103 [2024-04-18 19:42:08.771834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:53.103 [2024-04-18 19:42:09.025349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:54.475 19:42:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:48:54.475 19:42:10 -- common/autotest_common.sh@850 -- # return 0 00:48:54.475 19:42:10 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:54.475 19:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:54.475 19:42:10 -- common/autotest_common.sh@10 -- # set +x 00:48:54.475 Some configs were skipped because the RPC state that can call them passed over. 
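The gpt_uuid steps that follow are a short RPC conversation with that spdk_tgt instance: load the bdev config, wait for the gpt module to examine the disk, then look each partition up by its partition GUID and check what comes back. Condensed into plain rpc.py calls (the UUIDs are the ones this run reports; rpc.py talks to the default /var/tmp/spdk.sock):

    # Condensed gpt_uuid flow; same RPCs and jq filters as the trace below.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc_py" bdev_wait_for_examine          # block until the GPT partitions are exposed

    # each partition must be addressable by its unique partition GUID
    "$rpc_py" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0]'
    "$rpc_py" bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'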
00:48:54.475 19:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:48:54.475 19:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:54.475 19:42:10 -- common/autotest_common.sh@10 -- # set +x 00:48:54.475 19:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:48:54.475 19:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:54.475 19:42:10 -- common/autotest_common.sh@10 -- # set +x 00:48:54.475 19:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@621 -- # bdev='[ 00:48:54.475 { 00:48:54.475 "name": "Nvme0n1p1", 00:48:54.475 "aliases": [ 00:48:54.475 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:48:54.475 ], 00:48:54.475 "product_name": "GPT Disk", 00:48:54.475 "block_size": 4096, 00:48:54.475 "num_blocks": 655104, 00:48:54.475 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:48:54.475 "assigned_rate_limits": { 00:48:54.475 "rw_ios_per_sec": 0, 00:48:54.475 "rw_mbytes_per_sec": 0, 00:48:54.475 "r_mbytes_per_sec": 0, 00:48:54.475 "w_mbytes_per_sec": 0 00:48:54.475 }, 00:48:54.475 "claimed": false, 00:48:54.475 "zoned": false, 00:48:54.475 "supported_io_types": { 00:48:54.475 "read": true, 00:48:54.475 "write": true, 00:48:54.475 "unmap": true, 00:48:54.475 "write_zeroes": true, 00:48:54.475 "flush": true, 00:48:54.475 "reset": true, 00:48:54.475 "compare": true, 00:48:54.475 "compare_and_write": false, 00:48:54.475 "abort": true, 00:48:54.475 "nvme_admin": false, 00:48:54.475 "nvme_io": false 00:48:54.475 }, 00:48:54.475 "driver_specific": { 00:48:54.475 "gpt": { 00:48:54.475 "base_bdev": "Nvme0n1", 00:48:54.475 "offset_blocks": 256, 00:48:54.475 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:48:54.475 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:48:54.475 "partition_name": "SPDK_TEST_first" 00:48:54.475 } 00:48:54.475 } 00:48:54.475 } 00:48:54.475 ]' 00:48:54.475 19:42:10 -- bdev/blockdev.sh@622 -- # jq -r length 00:48:54.475 19:42:10 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:48:54.475 19:42:10 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:48:54.475 19:42:10 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:48:54.475 19:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:48:54.475 19:42:10 -- common/autotest_common.sh@10 -- # set +x 00:48:54.475 19:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:48:54.475 19:42:10 -- bdev/blockdev.sh@626 -- # bdev='[ 00:48:54.475 { 00:48:54.475 "name": "Nvme0n1p2", 00:48:54.475 "aliases": [ 00:48:54.475 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:48:54.475 ], 00:48:54.475 "product_name": "GPT Disk", 00:48:54.475 "block_size": 4096, 00:48:54.475 "num_blocks": 655103, 00:48:54.475 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:48:54.475 "assigned_rate_limits": { 00:48:54.475 "rw_ios_per_sec": 0, 00:48:54.475 
"rw_mbytes_per_sec": 0, 00:48:54.475 "r_mbytes_per_sec": 0, 00:48:54.475 "w_mbytes_per_sec": 0 00:48:54.475 }, 00:48:54.475 "claimed": false, 00:48:54.475 "zoned": false, 00:48:54.475 "supported_io_types": { 00:48:54.475 "read": true, 00:48:54.475 "write": true, 00:48:54.475 "unmap": true, 00:48:54.475 "write_zeroes": true, 00:48:54.475 "flush": true, 00:48:54.475 "reset": true, 00:48:54.475 "compare": true, 00:48:54.475 "compare_and_write": false, 00:48:54.475 "abort": true, 00:48:54.475 "nvme_admin": false, 00:48:54.475 "nvme_io": false 00:48:54.475 }, 00:48:54.475 "driver_specific": { 00:48:54.475 "gpt": { 00:48:54.475 "base_bdev": "Nvme0n1", 00:48:54.475 "offset_blocks": 655360, 00:48:54.475 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:48:54.475 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:48:54.475 "partition_name": "SPDK_TEST_second" 00:48:54.475 } 00:48:54.475 } 00:48:54.475 } 00:48:54.475 ]' 00:48:54.475 19:42:10 -- bdev/blockdev.sh@627 -- # jq -r length 00:48:54.733 19:42:10 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:48:54.733 19:42:10 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:48:54.733 19:42:10 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:48:54.733 19:42:10 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:48:54.733 19:42:10 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:48:54.733 19:42:10 -- bdev/blockdev.sh@631 -- # killprocess 151254 00:48:54.733 19:42:10 -- common/autotest_common.sh@936 -- # '[' -z 151254 ']' 00:48:54.733 19:42:10 -- common/autotest_common.sh@940 -- # kill -0 151254 00:48:54.733 19:42:10 -- common/autotest_common.sh@941 -- # uname 00:48:54.733 19:42:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:48:54.733 19:42:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151254 00:48:54.733 19:42:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:48:54.733 killing process with pid 151254 00:48:54.733 19:42:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:48:54.733 19:42:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151254' 00:48:54.733 19:42:10 -- common/autotest_common.sh@955 -- # kill 151254 00:48:54.733 19:42:10 -- common/autotest_common.sh@960 -- # wait 151254 00:48:58.014 00:48:58.014 real 0m4.878s 00:48:58.014 user 0m5.150s 00:48:58.014 sys 0m0.527s 00:48:58.014 19:42:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:58.014 19:42:13 -- common/autotest_common.sh@10 -- # set +x 00:48:58.014 ************************************ 00:48:58.014 END TEST bdev_gpt_uuid 00:48:58.014 ************************************ 00:48:58.014 19:42:13 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:48:58.014 19:42:13 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:48:58.014 19:42:13 -- bdev/blockdev.sh@811 -- # cleanup 00:48:58.015 19:42:13 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:48:58.015 19:42:13 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:58.015 19:42:13 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:48:58.015 19:42:13 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:48:58.015 19:42:13 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:48:58.015 19:42:13 -- 
bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:48:58.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:58.015 Waiting for block devices as requested 00:48:58.015 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:58.015 19:42:13 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:48:58.015 19:42:13 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:48:58.273 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:48:58.273 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:48:58.273 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:48:58.273 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:48:58.273 19:42:13 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:48:58.273 ************************************ 00:48:58.273 END TEST blockdev_nvme_gpt 00:48:58.273 ************************************ 00:48:58.273 00:48:58.273 real 0m49.935s 00:48:58.273 user 1m9.277s 00:48:58.273 sys 0m6.819s 00:48:58.273 19:42:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:48:58.273 19:42:13 -- common/autotest_common.sh@10 -- # set +x 00:48:58.273 19:42:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:48:58.273 19:42:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:48:58.273 19:42:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:48:58.273 19:42:14 -- common/autotest_common.sh@10 -- # set +x 00:48:58.273 ************************************ 00:48:58.273 START TEST nvme 00:48:58.273 ************************************ 00:48:58.273 19:42:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:48:58.273 * Looking for test storage... 00:48:58.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:48:58.273 19:42:14 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:58.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:58.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:48:59.773 19:42:15 -- nvme/nvme.sh@79 -- # uname 00:48:59.773 19:42:15 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:48:59.773 19:42:15 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:48:59.773 19:42:15 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:48:59.773 19:42:15 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:48:59.773 19:42:15 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:48:59.773 19:42:15 -- common/autotest_common.sh@1055 -- # echo 0 00:48:59.773 19:42:15 -- common/autotest_common.sh@1057 -- # stubpid=151698 00:48:59.773 Waiting for stub to ready for secondary processes... 00:48:59.773 19:42:15 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:48:59.773 19:42:15 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:48:59.773 19:42:15 -- common/autotest_common.sh@1061 -- # [[ -e /proc/151698 ]] 00:48:59.773 19:42:15 -- common/autotest_common.sh@1062 -- # sleep 1s 00:48:59.773 19:42:15 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:48:59.773 [2024-04-18 19:42:15.678595] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
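Before any of the nvme tests run, autotest_common.sh brings up test/app/stub as the long-lived DPDK primary process and only proceeds once the stub has become ready; the per-test binaries then attach as secondary processes. The wait loop traced here amounts to roughly the following (arguments and the /var/run/spdk_stub0 marker are the ones shown in the log):

    # Start the stub primary process and wait for it to become ready.
    spdk_dir=/home/vagrant/spdk_repo/spdk
    "$spdk_dir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!

    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1    # same 1 s poll as in the trace
    done
    echo done.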
00:48:59.773 [2024-04-18 19:42:15.678774] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:49:00.757 19:42:16 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:49:00.757 19:42:16 -- common/autotest_common.sh@1061 -- # [[ -e /proc/151698 ]] 00:49:00.757 19:42:16 -- common/autotest_common.sh@1062 -- # sleep 1s 00:49:01.015 [2024-04-18 19:42:16.757433] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:01.273 [2024-04-18 19:42:16.975146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:01.273 [2024-04-18 19:42:16.975214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:49:01.273 [2024-04-18 19:42:16.975218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:01.273 [2024-04-18 19:42:16.986353] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:49:01.273 [2024-04-18 19:42:16.986434] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:49:01.273 [2024-04-18 19:42:16.993658] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:49:01.273 [2024-04-18 19:42:16.993856] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:49:01.838 19:42:17 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:49:01.838 done. 00:49:01.838 19:42:17 -- common/autotest_common.sh@1064 -- # echo done. 00:49:01.838 19:42:17 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:49:01.838 19:42:17 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:49:01.838 19:42:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:01.838 19:42:17 -- common/autotest_common.sh@10 -- # set +x 00:49:01.838 ************************************ 00:49:01.838 START TEST nvme_reset 00:49:01.838 ************************************ 00:49:01.838 19:42:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:49:02.096 Initializing NVMe Controllers 00:49:02.096 Skipping QEMU NVMe SSD at 0000:00:10.0 00:49:02.096 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:49:02.096 00:49:02.096 real 0m0.334s 00:49:02.096 user 0m0.109s 00:49:02.096 sys 0m0.139s 00:49:02.096 19:42:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:02.096 19:42:17 -- common/autotest_common.sh@10 -- # set +x 00:49:02.096 ************************************ 00:49:02.096 END TEST nvme_reset 00:49:02.096 ************************************ 00:49:02.354 19:42:18 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:49:02.354 19:42:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:02.354 19:42:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:02.354 19:42:18 -- common/autotest_common.sh@10 -- # set +x 00:49:02.354 ************************************ 00:49:02.354 START TEST nvme_identify 00:49:02.354 ************************************ 00:49:02.354 19:42:18 -- common/autotest_common.sh@1111 -- # nvme_identify 00:49:02.354 19:42:18 -- nvme/nvme.sh@12 -- # bdfs=() 00:49:02.354 19:42:18 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:49:02.354 19:42:18 -- nvme/nvme.sh@13 
-- # bdfs=($(get_nvme_bdfs)) 00:49:02.354 19:42:18 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:49:02.354 19:42:18 -- common/autotest_common.sh@1499 -- # bdfs=() 00:49:02.354 19:42:18 -- common/autotest_common.sh@1499 -- # local bdfs 00:49:02.354 19:42:18 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:49:02.354 19:42:18 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:49:02.354 19:42:18 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:49:02.354 19:42:18 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:49:02.354 19:42:18 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:49:02.354 19:42:18 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:49:02.612 [2024-04-18 19:42:18.433150] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 151762 terminated unexpected 00:49:02.612 ===================================================== 00:49:02.612 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:02.612 ===================================================== 00:49:02.612 Controller Capabilities/Features 00:49:02.612 ================================ 00:49:02.612 Vendor ID: 1b36 00:49:02.612 Subsystem Vendor ID: 1af4 00:49:02.612 Serial Number: 12340 00:49:02.612 Model Number: QEMU NVMe Ctrl 00:49:02.612 Firmware Version: 8.0.0 00:49:02.612 Recommended Arb Burst: 6 00:49:02.612 IEEE OUI Identifier: 00 54 52 00:49:02.612 Multi-path I/O 00:49:02.612 May have multiple subsystem ports: No 00:49:02.612 May have multiple controllers: No 00:49:02.612 Associated with SR-IOV VF: No 00:49:02.612 Max Data Transfer Size: 524288 00:49:02.612 Max Number of Namespaces: 256 00:49:02.612 Max Number of I/O Queues: 64 00:49:02.612 NVMe Specification Version (VS): 1.4 00:49:02.612 NVMe Specification Version (Identify): 1.4 00:49:02.612 Maximum Queue Entries: 2048 00:49:02.612 Contiguous Queues Required: Yes 00:49:02.612 Arbitration Mechanisms Supported 00:49:02.612 Weighted Round Robin: Not Supported 00:49:02.612 Vendor Specific: Not Supported 00:49:02.612 Reset Timeout: 7500 ms 00:49:02.612 Doorbell Stride: 4 bytes 00:49:02.612 NVM Subsystem Reset: Not Supported 00:49:02.612 Command Sets Supported 00:49:02.612 NVM Command Set: Supported 00:49:02.612 Boot Partition: Not Supported 00:49:02.612 Memory Page Size Minimum: 4096 bytes 00:49:02.612 Memory Page Size Maximum: 65536 bytes 00:49:02.613 Persistent Memory Region: Not Supported 00:49:02.613 Optional Asynchronous Events Supported 00:49:02.613 Namespace Attribute Notices: Supported 00:49:02.613 Firmware Activation Notices: Not Supported 00:49:02.613 ANA Change Notices: Not Supported 00:49:02.613 PLE Aggregate Log Change Notices: Not Supported 00:49:02.613 LBA Status Info Alert Notices: Not Supported 00:49:02.613 EGE Aggregate Log Change Notices: Not Supported 00:49:02.613 Normal NVM Subsystem Shutdown event: Not Supported 00:49:02.613 Zone Descriptor Change Notices: Not Supported 00:49:02.613 Discovery Log Change Notices: Not Supported 00:49:02.613 Controller Attributes 00:49:02.613 128-bit Host Identifier: Not Supported 00:49:02.613 Non-Operational Permissive Mode: Not Supported 00:49:02.613 NVM Sets: Not Supported 00:49:02.613 Read Recovery Levels: Not Supported 00:49:02.613 Endurance Groups: Not Supported 00:49:02.613 Predictable Latency Mode: Not Supported 00:49:02.613 Traffic Based Keep ALive: Not Supported 00:49:02.613 Namespace Granularity: Not 
Supported 00:49:02.613 SQ Associations: Not Supported 00:49:02.613 UUID List: Not Supported 00:49:02.613 Multi-Domain Subsystem: Not Supported 00:49:02.613 Fixed Capacity Management: Not Supported 00:49:02.613 Variable Capacity Management: Not Supported 00:49:02.613 Delete Endurance Group: Not Supported 00:49:02.613 Delete NVM Set: Not Supported 00:49:02.613 Extended LBA Formats Supported: Supported 00:49:02.613 Flexible Data Placement Supported: Not Supported 00:49:02.613 00:49:02.613 Controller Memory Buffer Support 00:49:02.613 ================================ 00:49:02.613 Supported: No 00:49:02.613 00:49:02.613 Persistent Memory Region Support 00:49:02.613 ================================ 00:49:02.613 Supported: No 00:49:02.613 00:49:02.613 Admin Command Set Attributes 00:49:02.613 ============================ 00:49:02.613 Security Send/Receive: Not Supported 00:49:02.613 Format NVM: Supported 00:49:02.613 Firmware Activate/Download: Not Supported 00:49:02.613 Namespace Management: Supported 00:49:02.613 Device Self-Test: Not Supported 00:49:02.613 Directives: Supported 00:49:02.613 NVMe-MI: Not Supported 00:49:02.613 Virtualization Management: Not Supported 00:49:02.613 Doorbell Buffer Config: Supported 00:49:02.613 Get LBA Status Capability: Not Supported 00:49:02.613 Command & Feature Lockdown Capability: Not Supported 00:49:02.613 Abort Command Limit: 4 00:49:02.613 Async Event Request Limit: 4 00:49:02.613 Number of Firmware Slots: N/A 00:49:02.613 Firmware Slot 1 Read-Only: N/A 00:49:02.613 Firmware Activation Without Reset: N/A 00:49:02.613 Multiple Update Detection Support: N/A 00:49:02.613 Firmware Update Granularity: No Information Provided 00:49:02.613 Per-Namespace SMART Log: Yes 00:49:02.613 Asymmetric Namespace Access Log Page: Not Supported 00:49:02.613 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:49:02.613 Command Effects Log Page: Supported 00:49:02.613 Get Log Page Extended Data: Supported 00:49:02.613 Telemetry Log Pages: Not Supported 00:49:02.613 Persistent Event Log Pages: Not Supported 00:49:02.613 Supported Log Pages Log Page: May Support 00:49:02.613 Commands Supported & Effects Log Page: Not Supported 00:49:02.613 Feature Identifiers & Effects Log Page:May Support 00:49:02.613 NVMe-MI Commands & Effects Log Page: May Support 00:49:02.613 Data Area 4 for Telemetry Log: Not Supported 00:49:02.613 Error Log Page Entries Supported: 1 00:49:02.613 Keep Alive: Not Supported 00:49:02.613 00:49:02.613 NVM Command Set Attributes 00:49:02.613 ========================== 00:49:02.613 Submission Queue Entry Size 00:49:02.613 Max: 64 00:49:02.613 Min: 64 00:49:02.613 Completion Queue Entry Size 00:49:02.613 Max: 16 00:49:02.613 Min: 16 00:49:02.613 Number of Namespaces: 256 00:49:02.613 Compare Command: Supported 00:49:02.613 Write Uncorrectable Command: Not Supported 00:49:02.613 Dataset Management Command: Supported 00:49:02.613 Write Zeroes Command: Supported 00:49:02.613 Set Features Save Field: Supported 00:49:02.613 Reservations: Not Supported 00:49:02.613 Timestamp: Supported 00:49:02.613 Copy: Supported 00:49:02.613 Volatile Write Cache: Present 00:49:02.613 Atomic Write Unit (Normal): 1 00:49:02.613 Atomic Write Unit (PFail): 1 00:49:02.613 Atomic Compare & Write Unit: 1 00:49:02.613 Fused Compare & Write: Not Supported 00:49:02.613 Scatter-Gather List 00:49:02.613 SGL Command Set: Supported 00:49:02.613 SGL Keyed: Not Supported 00:49:02.613 SGL Bit Bucket Descriptor: Not Supported 00:49:02.613 SGL Metadata Pointer: Not Supported 00:49:02.613 Oversized SGL: Not 
Supported 00:49:02.613 SGL Metadata Address: Not Supported 00:49:02.613 SGL Offset: Not Supported 00:49:02.613 Transport SGL Data Block: Not Supported 00:49:02.613 Replay Protected Memory Block: Not Supported 00:49:02.613 00:49:02.613 Firmware Slot Information 00:49:02.613 ========================= 00:49:02.613 Active slot: 1 00:49:02.613 Slot 1 Firmware Revision: 1.0 00:49:02.613 00:49:02.613 00:49:02.613 Commands Supported and Effects 00:49:02.613 ============================== 00:49:02.613 Admin Commands 00:49:02.613 -------------- 00:49:02.613 Delete I/O Submission Queue (00h): Supported 00:49:02.613 Create I/O Submission Queue (01h): Supported 00:49:02.613 Get Log Page (02h): Supported 00:49:02.613 Delete I/O Completion Queue (04h): Supported 00:49:02.613 Create I/O Completion Queue (05h): Supported 00:49:02.613 Identify (06h): Supported 00:49:02.613 Abort (08h): Supported 00:49:02.613 Set Features (09h): Supported 00:49:02.613 Get Features (0Ah): Supported 00:49:02.613 Asynchronous Event Request (0Ch): Supported 00:49:02.613 Namespace Attachment (15h): Supported NS-Inventory-Change 00:49:02.613 Directive Send (19h): Supported 00:49:02.613 Directive Receive (1Ah): Supported 00:49:02.613 Virtualization Management (1Ch): Supported 00:49:02.613 Doorbell Buffer Config (7Ch): Supported 00:49:02.613 Format NVM (80h): Supported LBA-Change 00:49:02.613 I/O Commands 00:49:02.613 ------------ 00:49:02.613 Flush (00h): Supported LBA-Change 00:49:02.613 Write (01h): Supported LBA-Change 00:49:02.613 Read (02h): Supported 00:49:02.613 Compare (05h): Supported 00:49:02.613 Write Zeroes (08h): Supported LBA-Change 00:49:02.613 Dataset Management (09h): Supported LBA-Change 00:49:02.613 Unknown (0Ch): Supported 00:49:02.613 Unknown (12h): Supported 00:49:02.613 Copy (19h): Supported LBA-Change 00:49:02.613 Unknown (1Dh): Supported LBA-Change 00:49:02.613 00:49:02.613 Error Log 00:49:02.613 ========= 00:49:02.613 00:49:02.613 Arbitration 00:49:02.613 =========== 00:49:02.613 Arbitration Burst: no limit 00:49:02.613 00:49:02.613 Power Management 00:49:02.613 ================ 00:49:02.613 Number of Power States: 1 00:49:02.613 Current Power State: Power State #0 00:49:02.613 Power State #0: 00:49:02.613 Max Power: 25.00 W 00:49:02.613 Non-Operational State: Operational 00:49:02.613 Entry Latency: 16 microseconds 00:49:02.613 Exit Latency: 4 microseconds 00:49:02.613 Relative Read Throughput: 0 00:49:02.613 Relative Read Latency: 0 00:49:02.613 Relative Write Throughput: 0 00:49:02.613 Relative Write Latency: 0 00:49:02.613 Idle Power: Not Reported 00:49:02.613 Active Power: Not Reported 00:49:02.613 Non-Operational Permissive Mode: Not Supported 00:49:02.613 00:49:02.613 Health Information 00:49:02.613 ================== 00:49:02.613 Critical Warnings: 00:49:02.613 Available Spare Space: OK 00:49:02.613 Temperature: OK 00:49:02.613 Device Reliability: OK 00:49:02.613 Read Only: No 00:49:02.613 Volatile Memory Backup: OK 00:49:02.613 Current Temperature: 323 Kelvin (50 Celsius) 00:49:02.613 Temperature Threshold: 343 Kelvin (70 Celsius) 00:49:02.613 Available Spare: 0% 00:49:02.613 Available Spare Threshold: 0% 00:49:02.613 Life Percentage Used: 0% 00:49:02.613 Data Units Read: 4108 00:49:02.613 Data Units Written: 3776 00:49:02.613 Host Read Commands: 210869 00:49:02.613 Host Write Commands: 223967 00:49:02.613 Controller Busy Time: 0 minutes 00:49:02.613 Power Cycles: 0 00:49:02.613 Power On Hours: 0 hours 00:49:02.613 Unsafe Shutdowns: 0 00:49:02.613 Unrecoverable Media Errors: 0 00:49:02.613 Lifetime 
Error Log Entries: 0 00:49:02.613 Warning Temperature Time: 0 minutes 00:49:02.613 Critical Temperature Time: 0 minutes 00:49:02.613 00:49:02.613 Number of Queues 00:49:02.613 ================ 00:49:02.613 Number of I/O Submission Queues: 64 00:49:02.613 Number of I/O Completion Queues: 64 00:49:02.613 00:49:02.613 ZNS Specific Controller Data 00:49:02.613 ============================ 00:49:02.613 Zone Append Size Limit: 0 00:49:02.613 00:49:02.613 00:49:02.613 Active Namespaces 00:49:02.613 ================= 00:49:02.613 Namespace ID:1 00:49:02.613 Error Recovery Timeout: Unlimited 00:49:02.613 Command Set Identifier: NVM (00h) 00:49:02.614 Deallocate: Supported 00:49:02.614 Deallocated/Unwritten Error: Supported 00:49:02.614 Deallocated Read Value: All 0x00 00:49:02.614 Deallocate in Write Zeroes: Not Supported 00:49:02.614 Deallocated Guard Field: 0xFFFF 00:49:02.614 Flush: Supported 00:49:02.614 Reservation: Not Supported 00:49:02.614 Namespace Sharing Capabilities: Private 00:49:02.614 Size (in LBAs): 1310720 (5GiB) 00:49:02.614 Capacity (in LBAs): 1310720 (5GiB) 00:49:02.614 Utilization (in LBAs): 1310720 (5GiB) 00:49:02.614 Thin Provisioning: Not Supported 00:49:02.614 Per-NS Atomic Units: No 00:49:02.614 Maximum Single Source Range Length: 128 00:49:02.614 Maximum Copy Length: 128 00:49:02.614 Maximum Source Range Count: 128 00:49:02.614 NGUID/EUI64 Never Reused: No 00:49:02.614 Namespace Write Protected: No 00:49:02.614 Number of LBA Formats: 8 00:49:02.614 Current LBA Format: LBA Format #04 00:49:02.614 LBA Format #00: Data Size: 512 Metadata Size: 0 00:49:02.614 LBA Format #01: Data Size: 512 Metadata Size: 8 00:49:02.614 LBA Format #02: Data Size: 512 Metadata Size: 16 00:49:02.614 LBA Format #03: Data Size: 512 Metadata Size: 64 00:49:02.614 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:49:02.614 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:49:02.614 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:49:02.614 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:49:02.614 00:49:02.614 19:42:18 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:49:02.614 19:42:18 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:49:02.873 ===================================================== 00:49:02.873 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:02.873 ===================================================== 00:49:02.873 Controller Capabilities/Features 00:49:02.873 ================================ 00:49:02.873 Vendor ID: 1b36 00:49:02.873 Subsystem Vendor ID: 1af4 00:49:02.873 Serial Number: 12340 00:49:02.873 Model Number: QEMU NVMe Ctrl 00:49:02.873 Firmware Version: 8.0.0 00:49:02.873 Recommended Arb Burst: 6 00:49:02.873 IEEE OUI Identifier: 00 54 52 00:49:02.873 Multi-path I/O 00:49:02.873 May have multiple subsystem ports: No 00:49:02.873 May have multiple controllers: No 00:49:02.873 Associated with SR-IOV VF: No 00:49:02.873 Max Data Transfer Size: 524288 00:49:02.873 Max Number of Namespaces: 256 00:49:02.873 Max Number of I/O Queues: 64 00:49:02.873 NVMe Specification Version (VS): 1.4 00:49:02.873 NVMe Specification Version (Identify): 1.4 00:49:02.873 Maximum Queue Entries: 2048 00:49:02.873 Contiguous Queues Required: Yes 00:49:02.873 Arbitration Mechanisms Supported 00:49:02.873 Weighted Round Robin: Not Supported 00:49:02.873 Vendor Specific: Not Supported 00:49:02.873 Reset Timeout: 7500 ms 00:49:02.873 Doorbell Stride: 4 bytes 00:49:02.873 NVM Subsystem Reset: Not Supported 
00:49:02.873 Command Sets Supported 00:49:02.873 NVM Command Set: Supported 00:49:02.873 Boot Partition: Not Supported 00:49:02.873 Memory Page Size Minimum: 4096 bytes 00:49:02.873 Memory Page Size Maximum: 65536 bytes 00:49:02.873 Persistent Memory Region: Not Supported 00:49:02.873 Optional Asynchronous Events Supported 00:49:02.873 Namespace Attribute Notices: Supported 00:49:02.873 Firmware Activation Notices: Not Supported 00:49:02.873 ANA Change Notices: Not Supported 00:49:02.873 PLE Aggregate Log Change Notices: Not Supported 00:49:02.873 LBA Status Info Alert Notices: Not Supported 00:49:02.873 EGE Aggregate Log Change Notices: Not Supported 00:49:02.873 Normal NVM Subsystem Shutdown event: Not Supported 00:49:02.873 Zone Descriptor Change Notices: Not Supported 00:49:02.873 Discovery Log Change Notices: Not Supported 00:49:02.873 Controller Attributes 00:49:02.873 128-bit Host Identifier: Not Supported 00:49:02.873 Non-Operational Permissive Mode: Not Supported 00:49:02.873 NVM Sets: Not Supported 00:49:02.873 Read Recovery Levels: Not Supported 00:49:02.873 Endurance Groups: Not Supported 00:49:02.873 Predictable Latency Mode: Not Supported 00:49:02.873 Traffic Based Keep ALive: Not Supported 00:49:02.873 Namespace Granularity: Not Supported 00:49:02.873 SQ Associations: Not Supported 00:49:02.873 UUID List: Not Supported 00:49:02.873 Multi-Domain Subsystem: Not Supported 00:49:02.873 Fixed Capacity Management: Not Supported 00:49:02.873 Variable Capacity Management: Not Supported 00:49:02.873 Delete Endurance Group: Not Supported 00:49:02.873 Delete NVM Set: Not Supported 00:49:02.873 Extended LBA Formats Supported: Supported 00:49:02.873 Flexible Data Placement Supported: Not Supported 00:49:02.873 00:49:02.873 Controller Memory Buffer Support 00:49:02.873 ================================ 00:49:02.873 Supported: No 00:49:02.873 00:49:02.873 Persistent Memory Region Support 00:49:02.873 ================================ 00:49:02.873 Supported: No 00:49:02.873 00:49:02.873 Admin Command Set Attributes 00:49:02.873 ============================ 00:49:02.873 Security Send/Receive: Not Supported 00:49:02.873 Format NVM: Supported 00:49:02.873 Firmware Activate/Download: Not Supported 00:49:02.873 Namespace Management: Supported 00:49:02.874 Device Self-Test: Not Supported 00:49:02.874 Directives: Supported 00:49:02.874 NVMe-MI: Not Supported 00:49:02.874 Virtualization Management: Not Supported 00:49:02.874 Doorbell Buffer Config: Supported 00:49:02.874 Get LBA Status Capability: Not Supported 00:49:02.874 Command & Feature Lockdown Capability: Not Supported 00:49:02.874 Abort Command Limit: 4 00:49:02.874 Async Event Request Limit: 4 00:49:02.874 Number of Firmware Slots: N/A 00:49:02.874 Firmware Slot 1 Read-Only: N/A 00:49:02.874 Firmware Activation Without Reset: N/A 00:49:02.874 Multiple Update Detection Support: N/A 00:49:02.874 Firmware Update Granularity: No Information Provided 00:49:02.874 Per-Namespace SMART Log: Yes 00:49:02.874 Asymmetric Namespace Access Log Page: Not Supported 00:49:02.874 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:49:02.874 Command Effects Log Page: Supported 00:49:02.874 Get Log Page Extended Data: Supported 00:49:02.874 Telemetry Log Pages: Not Supported 00:49:02.874 Persistent Event Log Pages: Not Supported 00:49:02.874 Supported Log Pages Log Page: May Support 00:49:02.874 Commands Supported & Effects Log Page: Not Supported 00:49:02.874 Feature Identifiers & Effects Log Page:May Support 00:49:02.874 NVMe-MI Commands & Effects Log Page: May 
Support 00:49:02.874 Data Area 4 for Telemetry Log: Not Supported 00:49:02.874 Error Log Page Entries Supported: 1 00:49:02.874 Keep Alive: Not Supported 00:49:02.874 00:49:02.874 NVM Command Set Attributes 00:49:02.874 ========================== 00:49:02.874 Submission Queue Entry Size 00:49:02.874 Max: 64 00:49:02.874 Min: 64 00:49:02.874 Completion Queue Entry Size 00:49:02.874 Max: 16 00:49:02.874 Min: 16 00:49:02.874 Number of Namespaces: 256 00:49:02.874 Compare Command: Supported 00:49:02.874 Write Uncorrectable Command: Not Supported 00:49:02.874 Dataset Management Command: Supported 00:49:02.874 Write Zeroes Command: Supported 00:49:02.874 Set Features Save Field: Supported 00:49:02.874 Reservations: Not Supported 00:49:02.874 Timestamp: Supported 00:49:02.874 Copy: Supported 00:49:02.874 Volatile Write Cache: Present 00:49:02.874 Atomic Write Unit (Normal): 1 00:49:02.874 Atomic Write Unit (PFail): 1 00:49:02.874 Atomic Compare & Write Unit: 1 00:49:02.874 Fused Compare & Write: Not Supported 00:49:02.874 Scatter-Gather List 00:49:02.874 SGL Command Set: Supported 00:49:02.874 SGL Keyed: Not Supported 00:49:02.874 SGL Bit Bucket Descriptor: Not Supported 00:49:02.874 SGL Metadata Pointer: Not Supported 00:49:02.874 Oversized SGL: Not Supported 00:49:02.874 SGL Metadata Address: Not Supported 00:49:02.874 SGL Offset: Not Supported 00:49:02.874 Transport SGL Data Block: Not Supported 00:49:02.874 Replay Protected Memory Block: Not Supported 00:49:02.874 00:49:02.874 Firmware Slot Information 00:49:02.874 ========================= 00:49:02.874 Active slot: 1 00:49:02.874 Slot 1 Firmware Revision: 1.0 00:49:02.874 00:49:02.874 00:49:02.874 Commands Supported and Effects 00:49:02.874 ============================== 00:49:02.874 Admin Commands 00:49:02.874 -------------- 00:49:02.874 Delete I/O Submission Queue (00h): Supported 00:49:02.874 Create I/O Submission Queue (01h): Supported 00:49:02.874 Get Log Page (02h): Supported 00:49:02.874 Delete I/O Completion Queue (04h): Supported 00:49:02.874 Create I/O Completion Queue (05h): Supported 00:49:02.874 Identify (06h): Supported 00:49:02.874 Abort (08h): Supported 00:49:02.874 Set Features (09h): Supported 00:49:02.874 Get Features (0Ah): Supported 00:49:02.874 Asynchronous Event Request (0Ch): Supported 00:49:02.874 Namespace Attachment (15h): Supported NS-Inventory-Change 00:49:02.874 Directive Send (19h): Supported 00:49:02.874 Directive Receive (1Ah): Supported 00:49:02.874 Virtualization Management (1Ch): Supported 00:49:02.874 Doorbell Buffer Config (7Ch): Supported 00:49:02.874 Format NVM (80h): Supported LBA-Change 00:49:02.874 I/O Commands 00:49:02.874 ------------ 00:49:02.874 Flush (00h): Supported LBA-Change 00:49:02.874 Write (01h): Supported LBA-Change 00:49:02.874 Read (02h): Supported 00:49:02.874 Compare (05h): Supported 00:49:02.874 Write Zeroes (08h): Supported LBA-Change 00:49:02.874 Dataset Management (09h): Supported LBA-Change 00:49:02.874 Unknown (0Ch): Supported 00:49:02.874 Unknown (12h): Supported 00:49:02.874 Copy (19h): Supported LBA-Change 00:49:02.874 Unknown (1Dh): Supported LBA-Change 00:49:02.874 00:49:02.874 Error Log 00:49:02.874 ========= 00:49:02.874 00:49:02.874 Arbitration 00:49:02.874 =========== 00:49:02.874 Arbitration Burst: no limit 00:49:02.874 00:49:02.874 Power Management 00:49:02.874 ================ 00:49:02.874 Number of Power States: 1 00:49:02.874 Current Power State: Power State #0 00:49:02.874 Power State #0: 00:49:02.874 Max Power: 25.00 W 00:49:02.874 Non-Operational State: 
Operational 00:49:02.874 Entry Latency: 16 microseconds 00:49:02.874 Exit Latency: 4 microseconds 00:49:02.874 Relative Read Throughput: 0 00:49:02.874 Relative Read Latency: 0 00:49:02.874 Relative Write Throughput: 0 00:49:02.874 Relative Write Latency: 0 00:49:03.132 Idle Power: Not Reported 00:49:03.132 Active Power: Not Reported 00:49:03.132 Non-Operational Permissive Mode: Not Supported 00:49:03.132 00:49:03.132 Health Information 00:49:03.132 ================== 00:49:03.132 Critical Warnings: 00:49:03.132 Available Spare Space: OK 00:49:03.132 Temperature: OK 00:49:03.132 Device Reliability: OK 00:49:03.132 Read Only: No 00:49:03.132 Volatile Memory Backup: OK 00:49:03.132 Current Temperature: 323 Kelvin (50 Celsius) 00:49:03.132 Temperature Threshold: 343 Kelvin (70 Celsius) 00:49:03.132 Available Spare: 0% 00:49:03.132 Available Spare Threshold: 0% 00:49:03.132 Life Percentage Used: 0% 00:49:03.132 Data Units Read: 4108 00:49:03.132 Data Units Written: 3776 00:49:03.132 Host Read Commands: 210869 00:49:03.132 Host Write Commands: 223967 00:49:03.132 Controller Busy Time: 0 minutes 00:49:03.132 Power Cycles: 0 00:49:03.132 Power On Hours: 0 hours 00:49:03.132 Unsafe Shutdowns: 0 00:49:03.132 Unrecoverable Media Errors: 0 00:49:03.132 Lifetime Error Log Entries: 0 00:49:03.132 Warning Temperature Time: 0 minutes 00:49:03.132 Critical Temperature Time: 0 minutes 00:49:03.132 00:49:03.132 Number of Queues 00:49:03.132 ================ 00:49:03.132 Number of I/O Submission Queues: 64 00:49:03.132 Number of I/O Completion Queues: 64 00:49:03.132 00:49:03.132 ZNS Specific Controller Data 00:49:03.132 ============================ 00:49:03.132 Zone Append Size Limit: 0 00:49:03.132 00:49:03.132 00:49:03.132 Active Namespaces 00:49:03.132 ================= 00:49:03.132 Namespace ID:1 00:49:03.132 Error Recovery Timeout: Unlimited 00:49:03.132 Command Set Identifier: NVM (00h) 00:49:03.132 Deallocate: Supported 00:49:03.132 Deallocated/Unwritten Error: Supported 00:49:03.132 Deallocated Read Value: All 0x00 00:49:03.132 Deallocate in Write Zeroes: Not Supported 00:49:03.132 Deallocated Guard Field: 0xFFFF 00:49:03.132 Flush: Supported 00:49:03.132 Reservation: Not Supported 00:49:03.132 Namespace Sharing Capabilities: Private 00:49:03.132 Size (in LBAs): 1310720 (5GiB) 00:49:03.132 Capacity (in LBAs): 1310720 (5GiB) 00:49:03.133 Utilization (in LBAs): 1310720 (5GiB) 00:49:03.133 Thin Provisioning: Not Supported 00:49:03.133 Per-NS Atomic Units: No 00:49:03.133 Maximum Single Source Range Length: 128 00:49:03.133 Maximum Copy Length: 128 00:49:03.133 Maximum Source Range Count: 128 00:49:03.133 NGUID/EUI64 Never Reused: No 00:49:03.133 Namespace Write Protected: No 00:49:03.133 Number of LBA Formats: 8 00:49:03.133 Current LBA Format: LBA Format #04 00:49:03.133 LBA Format #00: Data Size: 512 Metadata Size: 0 00:49:03.133 LBA Format #01: Data Size: 512 Metadata Size: 8 00:49:03.133 LBA Format #02: Data Size: 512 Metadata Size: 16 00:49:03.133 LBA Format #03: Data Size: 512 Metadata Size: 64 00:49:03.133 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:49:03.133 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:49:03.133 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:49:03.133 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:49:03.133 00:49:03.133 00:49:03.133 real 0m0.790s 00:49:03.133 user 0m0.356s 00:49:03.133 sys 0m0.290s 00:49:03.133 19:42:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:03.133 19:42:18 -- common/autotest_common.sh@10 -- # set +x 00:49:03.133 
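The identify dump above was produced by the spdk_nvme_identify invocation recorded earlier in this log. A minimal sketch of reproducing that step by hand, assuming the same build location and the same QEMU controller BDF (0000:00:10.0) used in this run, and that the device has already been unbound from the kernel driver (e.g. via the SPDK setup script):

# sketch only: re-run the identify step for the controller exercised above
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
sudo "$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:10.0' -i 0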
************************************ 00:49:03.133 END TEST nvme_identify 00:49:03.133 ************************************ 00:49:03.133 19:42:18 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:49:03.133 19:42:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:03.133 19:42:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:03.133 19:42:18 -- common/autotest_common.sh@10 -- # set +x 00:49:03.133 ************************************ 00:49:03.133 START TEST nvme_perf 00:49:03.133 ************************************ 00:49:03.133 19:42:18 -- common/autotest_common.sh@1111 -- # nvme_perf 00:49:03.133 19:42:18 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:49:04.509 Initializing NVMe Controllers 00:49:04.509 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:04.509 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:49:04.509 Initialization complete. Launching workers. 00:49:04.509 ======================================================== 00:49:04.509 Latency(us) 00:49:04.509 Device Information : IOPS MiB/s Average min max 00:49:04.509 PCIE (0000:00:10.0) NSID 1 from core 0: 82111.92 962.25 1557.60 602.28 7047.43 00:49:04.509 ======================================================== 00:49:04.509 Total : 82111.92 962.25 1557.60 602.28 7047.43 00:49:04.509 00:49:04.509 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:49:04.509 ================================================================================= 00:49:04.509 1.00000% : 737.280us 00:49:04.509 10.00000% : 983.040us 00:49:04.509 25.00000% : 1240.503us 00:49:04.509 50.00000% : 1521.371us 00:49:04.509 75.00000% : 1817.844us 00:49:04.509 90.00000% : 2106.514us 00:49:04.509 95.00000% : 2340.571us 00:49:04.509 98.00000% : 2808.686us 00:49:04.509 99.00000% : 3136.366us 00:49:04.509 99.50000% : 3635.688us 00:49:04.509 99.90000% : 5430.126us 00:49:04.509 99.99000% : 6678.430us 00:49:04.509 99.99900% : 7052.922us 00:49:04.509 99.99990% : 7052.922us 00:49:04.509 99.99999% : 7052.922us 00:49:04.509 00:49:04.509 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:49:04.509 ============================================================================== 00:49:04.509 Range in us Cumulative IO count 00:49:04.509 600.747 - 604.648: 0.0037% ( 3) 00:49:04.509 604.648 - 608.549: 0.0049% ( 1) 00:49:04.509 608.549 - 612.450: 0.0061% ( 1) 00:49:04.509 612.450 - 616.350: 0.0122% ( 5) 00:49:04.509 616.350 - 620.251: 0.0195% ( 6) 00:49:04.509 620.251 - 624.152: 0.0256% ( 5) 00:49:04.509 624.152 - 628.053: 0.0280% ( 2) 00:49:04.509 628.053 - 631.954: 0.0390% ( 9) 00:49:04.509 631.954 - 635.855: 0.0585% ( 16) 00:49:04.509 635.855 - 639.756: 0.0682% ( 8) 00:49:04.509 639.756 - 643.657: 0.0828% ( 12) 00:49:04.509 643.657 - 647.558: 0.0950% ( 10) 00:49:04.509 647.558 - 651.459: 0.1120% ( 14) 00:49:04.509 651.459 - 655.360: 0.1315% ( 16) 00:49:04.509 655.360 - 659.261: 0.1510% ( 16) 00:49:04.509 659.261 - 663.162: 0.1742% ( 19) 00:49:04.509 663.162 - 667.063: 0.1985% ( 20) 00:49:04.509 667.063 - 670.964: 0.2253% ( 22) 00:49:04.509 670.964 - 674.865: 0.2484% ( 19) 00:49:04.509 674.865 - 678.766: 0.2728% ( 20) 00:49:04.509 678.766 - 682.667: 0.3118% ( 32) 00:49:04.509 682.667 - 686.568: 0.3507% ( 32) 00:49:04.509 686.568 - 690.469: 0.3824% ( 26) 00:49:04.509 690.469 - 694.370: 0.4214% ( 32) 00:49:04.509 694.370 - 698.270: 0.4652% ( 36) 00:49:04.509 698.270 - 702.171: 0.5127% ( 39) 00:49:04.509 702.171 - 706.072: 0.5578% ( 
37) 00:49:04.509 706.072 - 709.973: 0.6065% ( 40) 00:49:04.509 709.973 - 713.874: 0.6674% ( 50) 00:49:04.509 713.874 - 717.775: 0.7344% ( 55) 00:49:04.509 717.775 - 721.676: 0.7794% ( 37) 00:49:04.509 721.676 - 725.577: 0.8513% ( 59) 00:49:04.509 725.577 - 729.478: 0.9170% ( 54) 00:49:04.509 729.478 - 733.379: 0.9938% ( 63) 00:49:04.509 733.379 - 737.280: 1.0595% ( 54) 00:49:04.509 737.280 - 741.181: 1.1241% ( 53) 00:49:04.510 741.181 - 745.082: 1.2118% ( 72) 00:49:04.510 745.082 - 748.983: 1.3055% ( 77) 00:49:04.510 748.983 - 752.884: 1.3908% ( 70) 00:49:04.510 752.884 - 756.785: 1.4882% ( 80) 00:49:04.510 756.785 - 760.686: 1.5686% ( 66) 00:49:04.510 760.686 - 764.587: 1.6685% ( 82) 00:49:04.510 764.587 - 768.488: 1.7829% ( 94) 00:49:04.510 768.488 - 772.389: 1.8670% ( 69) 00:49:04.510 772.389 - 776.290: 1.9741% ( 88) 00:49:04.510 776.290 - 780.190: 2.0923% ( 97) 00:49:04.510 780.190 - 784.091: 2.1958% ( 85) 00:49:04.510 784.091 - 787.992: 2.3066% ( 91) 00:49:04.510 787.992 - 791.893: 2.4247% ( 97) 00:49:04.510 791.893 - 795.794: 2.5441% ( 98) 00:49:04.510 795.794 - 799.695: 2.6561% ( 92) 00:49:04.510 799.695 - 803.596: 2.7864% ( 107) 00:49:04.510 803.596 - 807.497: 2.9082% ( 100) 00:49:04.510 807.497 - 811.398: 3.0154% ( 88) 00:49:04.510 811.398 - 815.299: 3.1579% ( 117) 00:49:04.510 815.299 - 819.200: 3.2784% ( 99) 00:49:04.510 819.200 - 823.101: 3.3954% ( 96) 00:49:04.510 823.101 - 827.002: 3.5366% ( 116) 00:49:04.510 827.002 - 830.903: 3.6645% ( 105) 00:49:04.510 830.903 - 834.804: 3.7778% ( 93) 00:49:04.510 834.804 - 838.705: 3.9166% ( 114) 00:49:04.510 838.705 - 842.606: 4.0567% ( 115) 00:49:04.510 842.606 - 846.507: 4.1772% ( 99) 00:49:04.510 846.507 - 850.408: 4.3295% ( 125) 00:49:04.510 850.408 - 854.309: 4.4622% ( 109) 00:49:04.510 854.309 - 858.210: 4.5949% ( 109) 00:49:04.510 858.210 - 862.110: 4.7460% ( 124) 00:49:04.510 862.110 - 866.011: 4.9104% ( 135) 00:49:04.510 866.011 - 869.912: 5.0455% ( 111) 00:49:04.510 869.912 - 873.813: 5.1868% ( 116) 00:49:04.510 873.813 - 877.714: 5.3549% ( 138) 00:49:04.510 877.714 - 881.615: 5.5266% ( 141) 00:49:04.510 881.615 - 885.516: 5.6679% ( 116) 00:49:04.510 885.516 - 889.417: 5.8408% ( 142) 00:49:04.510 889.417 - 893.318: 6.0016% ( 132) 00:49:04.510 893.318 - 897.219: 6.1842% ( 150) 00:49:04.510 897.219 - 901.120: 6.3511% ( 137) 00:49:04.510 901.120 - 905.021: 6.5021% ( 124) 00:49:04.510 905.021 - 908.922: 6.6677% ( 136) 00:49:04.510 908.922 - 912.823: 6.8419% ( 143) 00:49:04.510 912.823 - 916.724: 7.0099% ( 138) 00:49:04.510 916.724 - 920.625: 7.1804% ( 140) 00:49:04.510 920.625 - 924.526: 7.3522% ( 141) 00:49:04.510 924.526 - 928.427: 7.5312% ( 147) 00:49:04.510 928.427 - 932.328: 7.6968% ( 136) 00:49:04.510 932.328 - 936.229: 7.8685% ( 141) 00:49:04.510 936.229 - 940.130: 8.0622% ( 159) 00:49:04.510 940.130 - 944.030: 8.2254% ( 134) 00:49:04.510 944.030 - 947.931: 8.4178% ( 158) 00:49:04.510 947.931 - 951.832: 8.5676% ( 123) 00:49:04.510 951.832 - 955.733: 8.7527% ( 152) 00:49:04.510 955.733 - 959.634: 8.9390% ( 153) 00:49:04.510 959.634 - 963.535: 9.1266% ( 154) 00:49:04.510 963.535 - 967.436: 9.3141% ( 154) 00:49:04.510 967.436 - 971.337: 9.4883% ( 143) 00:49:04.510 971.337 - 975.238: 9.6551% ( 137) 00:49:04.510 975.238 - 979.139: 9.8439% ( 155) 00:49:04.510 979.139 - 983.040: 10.0229% ( 147) 00:49:04.510 983.040 - 986.941: 10.2141% ( 157) 00:49:04.510 986.941 - 990.842: 10.4004% ( 153) 00:49:04.510 990.842 - 994.743: 10.5928% ( 158) 00:49:04.510 994.743 - 998.644: 10.7767% ( 151) 00:49:04.510 998.644 - 1006.446: 11.1531% ( 309) 
00:49:04.510 1006.446 - 1014.248: 11.5586% ( 333) 00:49:04.510 1014.248 - 1022.050: 11.9337% ( 308) 00:49:04.510 1022.050 - 1029.851: 12.3368% ( 331) 00:49:04.510 1029.851 - 1037.653: 12.7290% ( 322) 00:49:04.510 1037.653 - 1045.455: 13.1284% ( 328) 00:49:04.510 1045.455 - 1053.257: 13.5218% ( 323) 00:49:04.510 1053.257 - 1061.059: 13.9492% ( 351) 00:49:04.510 1061.059 - 1068.861: 14.3426% ( 323) 00:49:04.510 1068.861 - 1076.663: 14.7725% ( 353) 00:49:04.510 1076.663 - 1084.465: 15.1841% ( 338) 00:49:04.510 1084.465 - 1092.267: 15.6226% ( 360) 00:49:04.510 1092.267 - 1100.069: 16.0586% ( 358) 00:49:04.510 1100.069 - 1107.870: 16.4872% ( 352) 00:49:04.510 1107.870 - 1115.672: 16.9318% ( 365) 00:49:04.510 1115.672 - 1123.474: 17.3750% ( 364) 00:49:04.510 1123.474 - 1131.276: 17.8342% ( 377) 00:49:04.510 1131.276 - 1139.078: 18.2677% ( 356) 00:49:04.510 1139.078 - 1146.880: 18.7658% ( 409) 00:49:04.510 1146.880 - 1154.682: 19.2311% ( 382) 00:49:04.510 1154.682 - 1162.484: 19.7438% ( 421) 00:49:04.510 1162.484 - 1170.286: 20.2516% ( 417) 00:49:04.510 1170.286 - 1178.088: 20.7899% ( 442) 00:49:04.510 1178.088 - 1185.890: 21.3148% ( 431) 00:49:04.510 1185.890 - 1193.691: 21.9115% ( 490) 00:49:04.510 1193.691 - 1201.493: 22.4596% ( 450) 00:49:04.510 1201.493 - 1209.295: 23.0843% ( 513) 00:49:04.510 1209.295 - 1217.097: 23.6616% ( 474) 00:49:04.510 1217.097 - 1224.899: 24.3204% ( 541) 00:49:04.510 1224.899 - 1232.701: 24.9342% ( 504) 00:49:04.510 1232.701 - 1240.503: 25.6004% ( 547) 00:49:04.510 1240.503 - 1248.305: 26.2227% ( 511) 00:49:04.510 1248.305 - 1256.107: 26.8950% ( 552) 00:49:04.510 1256.107 - 1263.909: 27.5563% ( 543) 00:49:04.510 1263.909 - 1271.710: 28.2139% ( 540) 00:49:04.510 1271.710 - 1279.512: 28.9142% ( 575) 00:49:04.510 1279.512 - 1287.314: 29.5828% ( 549) 00:49:04.510 1287.314 - 1295.116: 30.2769% ( 570) 00:49:04.510 1295.116 - 1302.918: 30.9285% ( 535) 00:49:04.510 1302.918 - 1310.720: 31.6531% ( 595) 00:49:04.510 1310.720 - 1318.522: 32.3120% ( 541) 00:49:04.510 1318.522 - 1326.324: 33.0256% ( 586) 00:49:04.510 1326.324 - 1334.126: 33.6930% ( 548) 00:49:04.510 1334.126 - 1341.928: 34.3835% ( 567) 00:49:04.510 1341.928 - 1349.730: 35.0740% ( 567) 00:49:04.510 1349.730 - 1357.531: 35.7633% ( 566) 00:49:04.510 1357.531 - 1365.333: 36.4527% ( 566) 00:49:04.510 1365.333 - 1373.135: 37.1590% ( 580) 00:49:04.510 1373.135 - 1380.937: 37.8544% ( 571) 00:49:04.510 1380.937 - 1388.739: 38.5766% ( 593) 00:49:04.510 1388.739 - 1396.541: 39.2756% ( 574) 00:49:04.510 1396.541 - 1404.343: 39.9515% ( 555) 00:49:04.510 1404.343 - 1412.145: 40.6859% ( 603) 00:49:04.510 1412.145 - 1419.947: 41.3764% ( 567) 00:49:04.510 1419.947 - 1427.749: 42.1035% ( 597) 00:49:04.510 1427.749 - 1435.550: 42.7928% ( 566) 00:49:04.510 1435.550 - 1443.352: 43.5381% ( 612) 00:49:04.510 1443.352 - 1451.154: 44.2323% ( 570) 00:49:04.510 1451.154 - 1458.956: 44.9630% ( 600) 00:49:04.510 1458.956 - 1466.758: 45.6584% ( 571) 00:49:04.510 1466.758 - 1474.560: 46.4122% ( 619) 00:49:04.510 1474.560 - 1482.362: 47.1088% ( 572) 00:49:04.510 1482.362 - 1490.164: 47.8542% ( 612) 00:49:04.510 1490.164 - 1497.966: 48.5605% ( 580) 00:49:04.510 1497.966 - 1505.768: 49.2839% ( 594) 00:49:04.510 1505.768 - 1513.570: 49.9756% ( 568) 00:49:04.510 1513.570 - 1521.371: 50.6930% ( 589) 00:49:04.510 1521.371 - 1529.173: 51.4078% ( 587) 00:49:04.510 1529.173 - 1536.975: 52.1373% ( 599) 00:49:04.510 1536.975 - 1544.777: 52.8510% ( 586) 00:49:04.510 1544.777 - 1552.579: 53.5622% ( 584) 00:49:04.510 1552.579 - 1560.381: 54.2637% ( 576) 
00:49:04.510 1560.381 - 1568.183: 54.9981% ( 603) 00:49:04.510 1568.183 - 1575.985: 55.6922% ( 570) 00:49:04.510 1575.985 - 1583.787: 56.4181% ( 596) 00:49:04.510 1583.787 - 1591.589: 57.1232% ( 579) 00:49:04.510 1591.589 - 1599.390: 57.8369% ( 586) 00:49:04.510 1599.390 - 1607.192: 58.5359% ( 574) 00:49:04.510 1607.192 - 1614.994: 59.2496% ( 586) 00:49:04.510 1614.994 - 1622.796: 59.9510% ( 576) 00:49:04.510 1622.796 - 1630.598: 60.6720% ( 592) 00:49:04.510 1630.598 - 1638.400: 61.3357% ( 545) 00:49:04.510 1638.400 - 1646.202: 62.0652% ( 599) 00:49:04.510 1646.202 - 1654.004: 62.7192% ( 537) 00:49:04.510 1654.004 - 1661.806: 63.4243% ( 579) 00:49:04.510 1661.806 - 1669.608: 64.0698% ( 530) 00:49:04.510 1669.608 - 1677.410: 64.7835% ( 586) 00:49:04.510 1677.410 - 1685.211: 65.4326% ( 533) 00:49:04.510 1685.211 - 1693.013: 66.1194% ( 564) 00:49:04.510 1693.013 - 1700.815: 66.7795% ( 542) 00:49:04.510 1700.815 - 1708.617: 67.4262% ( 531) 00:49:04.510 1708.617 - 1716.419: 68.0899% ( 545) 00:49:04.510 1716.419 - 1724.221: 68.7001% ( 501) 00:49:04.510 1724.221 - 1732.023: 69.3674% ( 548) 00:49:04.510 1732.023 - 1739.825: 69.9471% ( 476) 00:49:04.510 1739.825 - 1747.627: 70.5902% ( 528) 00:49:04.510 1747.627 - 1755.429: 71.1613% ( 469) 00:49:04.510 1755.429 - 1763.230: 71.7520% ( 485) 00:49:04.510 1763.230 - 1771.032: 72.3268% ( 472) 00:49:04.510 1771.032 - 1778.834: 72.8700% ( 446) 00:49:04.510 1778.834 - 1786.636: 73.4241% ( 455) 00:49:04.510 1786.636 - 1794.438: 73.9502% ( 432) 00:49:04.510 1794.438 - 1802.240: 74.4934% ( 446) 00:49:04.510 1802.240 - 1810.042: 74.9915% ( 409) 00:49:04.510 1810.042 - 1817.844: 75.5103% ( 426) 00:49:04.510 1817.844 - 1825.646: 75.9889% ( 393) 00:49:04.510 1825.646 - 1833.448: 76.4931% ( 414) 00:49:04.510 1833.448 - 1841.250: 76.9595% ( 383) 00:49:04.510 1841.250 - 1849.051: 77.4503% ( 403) 00:49:04.510 1849.051 - 1856.853: 77.8851% ( 357) 00:49:04.510 1856.853 - 1864.655: 78.3698% ( 398) 00:49:04.510 1864.655 - 1872.457: 78.8204% ( 370) 00:49:04.510 1872.457 - 1880.259: 79.2783% ( 376) 00:49:04.510 1880.259 - 1888.061: 79.7167% ( 360) 00:49:04.510 1888.061 - 1895.863: 80.1698% ( 372) 00:49:04.510 1895.863 - 1903.665: 80.6167% ( 367) 00:49:04.510 1903.665 - 1911.467: 81.0381% ( 346) 00:49:04.511 1911.467 - 1919.269: 81.4704% ( 355) 00:49:04.511 1919.269 - 1927.070: 81.8760% ( 333) 00:49:04.511 1927.070 - 1934.872: 82.3095% ( 356) 00:49:04.511 1934.872 - 1942.674: 82.7029% ( 323) 00:49:04.511 1942.674 - 1950.476: 83.1194% ( 342) 00:49:04.511 1950.476 - 1958.278: 83.5189% ( 328) 00:49:04.511 1958.278 - 1966.080: 83.9086% ( 320) 00:49:04.511 1966.080 - 1973.882: 84.3141% ( 333) 00:49:04.511 1973.882 - 1981.684: 84.6904% ( 309) 00:49:04.511 1981.684 - 1989.486: 85.0887% ( 327) 00:49:04.511 1989.486 - 1997.288: 85.4613% ( 306) 00:49:04.511 1997.288 - 2012.891: 86.2066% ( 612) 00:49:04.511 2012.891 - 2028.495: 86.9459% ( 607) 00:49:04.511 2028.495 - 2044.099: 87.6583% ( 585) 00:49:04.511 2044.099 - 2059.703: 88.3598% ( 576) 00:49:04.511 2059.703 - 2075.307: 89.0625% ( 577) 00:49:04.511 2075.307 - 2090.910: 89.7055% ( 528) 00:49:04.511 2090.910 - 2106.514: 90.3011% ( 489) 00:49:04.511 2106.514 - 2122.118: 90.8515% ( 452) 00:49:04.511 2122.118 - 2137.722: 91.3630% ( 420) 00:49:04.511 2137.722 - 2153.326: 91.8221% ( 377) 00:49:04.511 2153.326 - 2168.930: 92.2301% ( 335) 00:49:04.511 2168.930 - 2184.533: 92.6028% ( 306) 00:49:04.511 2184.533 - 2200.137: 92.9474% ( 283) 00:49:04.511 2200.137 - 2215.741: 93.2641% ( 260) 00:49:04.511 2215.741 - 2231.345: 93.5625% ( 245) 
00:49:04.511 2231.345 - 2246.949: 93.8401% ( 228) 00:49:04.511 2246.949 - 2262.552: 94.0995% ( 213) 00:49:04.511 2262.552 - 2278.156: 94.3358% ( 194) 00:49:04.511 2278.156 - 2293.760: 94.5623% ( 186) 00:49:04.511 2293.760 - 2309.364: 94.7742% ( 174) 00:49:04.511 2309.364 - 2324.968: 94.9691% ( 160) 00:49:04.511 2324.968 - 2340.571: 95.1554% ( 153) 00:49:04.511 2340.571 - 2356.175: 95.3295% ( 143) 00:49:04.511 2356.175 - 2371.779: 95.4842% ( 127) 00:49:04.511 2371.779 - 2387.383: 95.6352% ( 124) 00:49:04.511 2387.383 - 2402.987: 95.7741% ( 114) 00:49:04.511 2402.987 - 2418.590: 95.9129% ( 114) 00:49:04.511 2418.590 - 2434.194: 96.0444% ( 108) 00:49:04.511 2434.194 - 2449.798: 96.1747% ( 107) 00:49:04.511 2449.798 - 2465.402: 96.2929% ( 97) 00:49:04.511 2465.402 - 2481.006: 96.4159% ( 101) 00:49:04.511 2481.006 - 2496.610: 96.5279% ( 92) 00:49:04.511 2496.610 - 2512.213: 96.6266% ( 81) 00:49:04.511 2512.213 - 2527.817: 96.7130% ( 71) 00:49:04.511 2527.817 - 2543.421: 96.8044% ( 75) 00:49:04.511 2543.421 - 2559.025: 96.8945% ( 74) 00:49:04.511 2559.025 - 2574.629: 96.9761% ( 67) 00:49:04.511 2574.629 - 2590.232: 97.0625% ( 71) 00:49:04.511 2590.232 - 2605.836: 97.1356% ( 60) 00:49:04.511 2605.836 - 2621.440: 97.2111% ( 62) 00:49:04.511 2621.440 - 2637.044: 97.2854% ( 61) 00:49:04.511 2637.044 - 2652.648: 97.3524% ( 55) 00:49:04.511 2652.648 - 2668.251: 97.4206% ( 56) 00:49:04.511 2668.251 - 2683.855: 97.4864% ( 54) 00:49:04.511 2683.855 - 2699.459: 97.5473% ( 50) 00:49:04.511 2699.459 - 2715.063: 97.6191% ( 59) 00:49:04.511 2715.063 - 2730.667: 97.6800% ( 50) 00:49:04.511 2730.667 - 2746.270: 97.7433% ( 52) 00:49:04.511 2746.270 - 2761.874: 97.8115% ( 56) 00:49:04.511 2761.874 - 2777.478: 97.8736% ( 51) 00:49:04.511 2777.478 - 2793.082: 97.9394% ( 54) 00:49:04.511 2793.082 - 2808.686: 98.0015% ( 51) 00:49:04.511 2808.686 - 2824.290: 98.0587% ( 47) 00:49:04.511 2824.290 - 2839.893: 98.1160% ( 47) 00:49:04.511 2839.893 - 2855.497: 98.1757% ( 49) 00:49:04.511 2855.497 - 2871.101: 98.2305% ( 45) 00:49:04.511 2871.101 - 2886.705: 98.2780% ( 39) 00:49:04.511 2886.705 - 2902.309: 98.3230% ( 37) 00:49:04.511 2902.309 - 2917.912: 98.3693% ( 38) 00:49:04.511 2917.912 - 2933.516: 98.4192% ( 41) 00:49:04.511 2933.516 - 2949.120: 98.4679% ( 40) 00:49:04.511 2949.120 - 2964.724: 98.5142% ( 38) 00:49:04.511 2964.724 - 2980.328: 98.5629% ( 40) 00:49:04.511 2980.328 - 2995.931: 98.6141% ( 42) 00:49:04.511 2995.931 - 3011.535: 98.6665% ( 43) 00:49:04.511 3011.535 - 3027.139: 98.7103% ( 36) 00:49:04.511 3027.139 - 3042.743: 98.7590% ( 40) 00:49:04.511 3042.743 - 3058.347: 98.8065% ( 39) 00:49:04.511 3058.347 - 3073.950: 98.8504% ( 36) 00:49:04.511 3073.950 - 3089.554: 98.8918% ( 34) 00:49:04.511 3089.554 - 3105.158: 98.9332% ( 34) 00:49:04.511 3105.158 - 3120.762: 98.9685% ( 29) 00:49:04.511 3120.762 - 3136.366: 99.0026% ( 28) 00:49:04.511 3136.366 - 3151.970: 99.0355% ( 27) 00:49:04.511 3151.970 - 3167.573: 99.0659% ( 25) 00:49:04.511 3167.573 - 3183.177: 99.1000% ( 28) 00:49:04.511 3183.177 - 3198.781: 99.1292% ( 24) 00:49:04.511 3198.781 - 3214.385: 99.1572% ( 23) 00:49:04.511 3214.385 - 3229.989: 99.1877% ( 25) 00:49:04.511 3229.989 - 3245.592: 99.2096% ( 18) 00:49:04.511 3245.592 - 3261.196: 99.2364% ( 22) 00:49:04.511 3261.196 - 3276.800: 99.2571% ( 17) 00:49:04.511 3276.800 - 3292.404: 99.2790% ( 18) 00:49:04.511 3292.404 - 3308.008: 99.2997% ( 17) 00:49:04.511 3308.008 - 3323.611: 99.3192% ( 16) 00:49:04.511 3323.611 - 3339.215: 99.3387% ( 16) 00:49:04.511 3339.215 - 3354.819: 99.3618% ( 19) 
00:49:04.511 3354.819 - 3370.423: 99.3765% ( 12) 00:49:04.511 3370.423 - 3386.027: 99.3923% ( 13) 00:49:04.511 3386.027 - 3401.630: 99.4069% ( 12) 00:49:04.511 3401.630 - 3417.234: 99.4203% ( 11) 00:49:04.511 3417.234 - 3432.838: 99.4361% ( 13) 00:49:04.511 3432.838 - 3448.442: 99.4459% ( 8) 00:49:04.511 3448.442 - 3464.046: 99.4581% ( 10) 00:49:04.511 3464.046 - 3479.650: 99.4666% ( 7) 00:49:04.511 3479.650 - 3495.253: 99.4715% ( 4) 00:49:04.511 3495.253 - 3510.857: 99.4751% ( 3) 00:49:04.511 3510.857 - 3526.461: 99.4800% ( 4) 00:49:04.511 3526.461 - 3542.065: 99.4824% ( 2) 00:49:04.511 3542.065 - 3557.669: 99.4848% ( 2) 00:49:04.511 3557.669 - 3573.272: 99.4897% ( 4) 00:49:04.511 3573.272 - 3588.876: 99.4922% ( 2) 00:49:04.511 3588.876 - 3604.480: 99.4970% ( 4) 00:49:04.511 3604.480 - 3620.084: 99.4995% ( 2) 00:49:04.511 3620.084 - 3635.688: 99.5043% ( 4) 00:49:04.511 3635.688 - 3651.291: 99.5080% ( 3) 00:49:04.511 3651.291 - 3666.895: 99.5104% ( 2) 00:49:04.511 3666.895 - 3682.499: 99.5153% ( 4) 00:49:04.511 3682.499 - 3698.103: 99.5202% ( 4) 00:49:04.511 3698.103 - 3713.707: 99.5238% ( 3) 00:49:04.511 3713.707 - 3729.310: 99.5275% ( 3) 00:49:04.511 3729.310 - 3744.914: 99.5311% ( 3) 00:49:04.511 3744.914 - 3760.518: 99.5348% ( 3) 00:49:04.511 3760.518 - 3776.122: 99.5409% ( 5) 00:49:04.511 3776.122 - 3791.726: 99.5433% ( 2) 00:49:04.511 3791.726 - 3807.330: 99.5482% ( 4) 00:49:04.511 3807.330 - 3822.933: 99.5543% ( 5) 00:49:04.511 3822.933 - 3838.537: 99.5591% ( 4) 00:49:04.511 3838.537 - 3854.141: 99.5664% ( 6) 00:49:04.511 3854.141 - 3869.745: 99.5738% ( 6) 00:49:04.511 3869.745 - 3885.349: 99.5798% ( 5) 00:49:04.511 3885.349 - 3900.952: 99.5847% ( 4) 00:49:04.511 3900.952 - 3916.556: 99.5884% ( 3) 00:49:04.511 3916.556 - 3932.160: 99.5908% ( 2) 00:49:04.511 3932.160 - 3947.764: 99.5957% ( 4) 00:49:04.511 3947.764 - 3963.368: 99.5993% ( 3) 00:49:04.511 3963.368 - 3978.971: 99.6030% ( 3) 00:49:04.511 3978.971 - 3994.575: 99.6042% ( 1) 00:49:04.511 3994.575 - 4025.783: 99.6103% ( 5) 00:49:04.511 4025.783 - 4056.990: 99.6164% ( 5) 00:49:04.511 4056.990 - 4088.198: 99.6225% ( 5) 00:49:04.511 4088.198 - 4119.406: 99.6286% ( 5) 00:49:04.511 4119.406 - 4150.613: 99.6346% ( 5) 00:49:04.511 4150.613 - 4181.821: 99.6395% ( 4) 00:49:04.511 4181.821 - 4213.029: 99.6444% ( 4) 00:49:04.511 4213.029 - 4244.236: 99.6529% ( 7) 00:49:04.511 4244.236 - 4275.444: 99.6614% ( 7) 00:49:04.511 4275.444 - 4306.651: 99.6687% ( 6) 00:49:04.511 4306.651 - 4337.859: 99.6748% ( 5) 00:49:04.511 4337.859 - 4369.067: 99.6834% ( 7) 00:49:04.511 4369.067 - 4400.274: 99.6919% ( 7) 00:49:04.511 4400.274 - 4431.482: 99.6992% ( 6) 00:49:04.511 4431.482 - 4462.690: 99.7053% ( 5) 00:49:04.511 4462.690 - 4493.897: 99.7114% ( 5) 00:49:04.511 4493.897 - 4525.105: 99.7150% ( 3) 00:49:04.511 4525.105 - 4556.312: 99.7199% ( 4) 00:49:04.511 4556.312 - 4587.520: 99.7260% ( 5) 00:49:04.511 4587.520 - 4618.728: 99.7345% ( 7) 00:49:04.511 4618.728 - 4649.935: 99.7418% ( 6) 00:49:04.511 4649.935 - 4681.143: 99.7516% ( 8) 00:49:04.511 4681.143 - 4712.350: 99.7625% ( 9) 00:49:04.511 4712.350 - 4743.558: 99.7735% ( 9) 00:49:04.511 4743.558 - 4774.766: 99.7820% ( 7) 00:49:04.511 4774.766 - 4805.973: 99.7905% ( 7) 00:49:04.511 4805.973 - 4837.181: 99.7978% ( 6) 00:49:04.511 4837.181 - 4868.389: 99.8051% ( 6) 00:49:04.511 4868.389 - 4899.596: 99.8112% ( 5) 00:49:04.511 4899.596 - 4930.804: 99.8173% ( 5) 00:49:04.511 4930.804 - 4962.011: 99.8222% ( 4) 00:49:04.511 4962.011 - 4993.219: 99.8271% ( 4) 00:49:04.511 4993.219 - 5024.427: 99.8332% ( 
5) 00:49:04.511 5024.427 - 5055.634: 99.8392% ( 5) 00:49:04.511 5055.634 - 5086.842: 99.8453% ( 5) 00:49:04.511 5086.842 - 5118.050: 99.8490% ( 3) 00:49:04.511 5118.050 - 5149.257: 99.8539% ( 4) 00:49:04.511 5149.257 - 5180.465: 99.8599% ( 5) 00:49:04.511 5180.465 - 5211.672: 99.8660% ( 5) 00:49:04.511 5211.672 - 5242.880: 99.8721% ( 5) 00:49:04.511 5242.880 - 5274.088: 99.8782% ( 5) 00:49:04.511 5274.088 - 5305.295: 99.8819% ( 3) 00:49:04.511 5305.295 - 5336.503: 99.8867% ( 4) 00:49:04.511 5336.503 - 5367.710: 99.8928% ( 5) 00:49:04.511 5367.710 - 5398.918: 99.8989% ( 5) 00:49:04.511 5398.918 - 5430.126: 99.9050% ( 5) 00:49:04.511 5430.126 - 5461.333: 99.9111% ( 5) 00:49:04.511 5461.333 - 5492.541: 99.9148% ( 3) 00:49:04.511 5492.541 - 5523.749: 99.9196% ( 4) 00:49:04.512 5523.749 - 5554.956: 99.9257% ( 5) 00:49:04.512 5554.956 - 5586.164: 99.9306% ( 4) 00:49:04.512 5586.164 - 5617.371: 99.9367% ( 5) 00:49:04.512 5617.371 - 5648.579: 99.9415% ( 4) 00:49:04.512 5648.579 - 5679.787: 99.9464% ( 4) 00:49:04.512 5679.787 - 5710.994: 99.9537% ( 6) 00:49:04.512 5710.994 - 5742.202: 99.9586% ( 4) 00:49:04.512 5773.410 - 5804.617: 99.9598% ( 1) 00:49:04.512 5804.617 - 5835.825: 99.9610% ( 1) 00:49:04.512 5835.825 - 5867.032: 99.9622% ( 1) 00:49:04.512 5867.032 - 5898.240: 99.9635% ( 1) 00:49:04.512 5898.240 - 5929.448: 99.9647% ( 1) 00:49:04.512 5960.655 - 5991.863: 99.9659% ( 1) 00:49:04.512 5991.863 - 6023.070: 99.9671% ( 1) 00:49:04.512 6023.070 - 6054.278: 99.9683% ( 1) 00:49:04.512 6054.278 - 6085.486: 99.9696% ( 1) 00:49:04.512 6085.486 - 6116.693: 99.9708% ( 1) 00:49:04.512 6116.693 - 6147.901: 99.9720% ( 1) 00:49:04.512 6147.901 - 6179.109: 99.9732% ( 1) 00:49:04.512 6210.316 - 6241.524: 99.9744% ( 1) 00:49:04.512 6241.524 - 6272.731: 99.9756% ( 1) 00:49:04.512 6272.731 - 6303.939: 99.9769% ( 1) 00:49:04.512 6303.939 - 6335.147: 99.9781% ( 1) 00:49:04.512 6335.147 - 6366.354: 99.9793% ( 1) 00:49:04.512 6366.354 - 6397.562: 99.9805% ( 1) 00:49:04.512 6397.562 - 6428.770: 99.9817% ( 1) 00:49:04.512 6428.770 - 6459.977: 99.9830% ( 1) 00:49:04.512 6459.977 - 6491.185: 99.9842% ( 1) 00:49:04.512 6522.392 - 6553.600: 99.9854% ( 1) 00:49:04.512 6553.600 - 6584.808: 99.9866% ( 1) 00:49:04.512 6584.808 - 6616.015: 99.9878% ( 1) 00:49:04.512 6616.015 - 6647.223: 99.9890% ( 1) 00:49:04.512 6647.223 - 6678.430: 99.9903% ( 1) 00:49:04.512 6678.430 - 6709.638: 99.9915% ( 1) 00:49:04.512 6709.638 - 6740.846: 99.9927% ( 1) 00:49:04.512 6772.053 - 6803.261: 99.9939% ( 1) 00:49:04.512 6803.261 - 6834.469: 99.9951% ( 1) 00:49:04.512 6834.469 - 6865.676: 99.9963% ( 1) 00:49:04.512 6865.676 - 6896.884: 99.9976% ( 1) 00:49:04.512 6896.884 - 6928.091: 99.9988% ( 1) 00:49:04.512 7021.714 - 7052.922: 100.0000% ( 1) 00:49:04.512 00:49:04.512 19:42:20 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:49:05.888 Initializing NVMe Controllers 00:49:05.888 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:05.888 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:49:05.888 Initialization complete. Launching workers. 
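The perf test makes two passes with the command lines captured in this log: a read pass (with -N) whose results appear above, and this write pass. A hand-run equivalent, as a sketch only; the flag comments are my reading of the common options (-q queue depth, -w I/O pattern, -o I/O size in bytes, -t run time in seconds), while the remaining flags are carried over from the log unchanged:

# sketch only: mirrors the two spdk_nvme_perf invocations recorded in this log
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
sudo "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read  -o 12288 -t 1 -LL -i 0 -N   # read pass
sudo "$SPDK_BIN/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0      # write pass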
00:49:05.888 ======================================================== 00:49:05.888 Latency(us) 00:49:05.888 Device Information : IOPS MiB/s Average min max 00:49:05.888 PCIE (0000:00:10.0) NSID 1 from core 0: 43464.79 509.35 2944.04 940.14 8189.54 00:49:05.888 ======================================================== 00:49:05.889 Total : 43464.79 509.35 2944.04 940.14 8189.54 00:49:05.889 00:49:05.889 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:49:05.889 ================================================================================= 00:49:05.889 1.00000% : 1263.909us 00:49:05.889 10.00000% : 1646.202us 00:49:05.889 25.00000% : 2137.722us 00:49:05.889 50.00000% : 2715.063us 00:49:05.889 75.00000% : 3744.914us 00:49:05.889 90.00000% : 4587.520us 00:49:05.889 95.00000% : 4868.389us 00:49:05.889 98.00000% : 5055.634us 00:49:05.889 99.00000% : 5430.126us 00:49:05.889 99.50000% : 6272.731us 00:49:05.889 99.90000% : 7552.244us 00:49:05.889 99.99000% : 8051.566us 00:49:05.889 99.99900% : 8238.811us 00:49:05.889 99.99990% : 8238.811us 00:49:05.889 99.99999% : 8238.811us 00:49:05.889 00:49:05.889 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:49:05.889 ============================================================================== 00:49:05.889 Range in us Cumulative IO count 00:49:05.889 940.130 - 944.030: 0.0023% ( 1) 00:49:05.889 979.139 - 983.040: 0.0046% ( 1) 00:49:05.889 998.644 - 1006.446: 0.0069% ( 1) 00:49:05.889 1006.446 - 1014.248: 0.0092% ( 1) 00:49:05.889 1014.248 - 1022.050: 0.0184% ( 4) 00:49:05.889 1022.050 - 1029.851: 0.0207% ( 1) 00:49:05.889 1029.851 - 1037.653: 0.0322% ( 5) 00:49:05.889 1037.653 - 1045.455: 0.0391% ( 3) 00:49:05.889 1045.455 - 1053.257: 0.0506% ( 5) 00:49:05.889 1053.257 - 1061.059: 0.0575% ( 3) 00:49:05.889 1061.059 - 1068.861: 0.0644% ( 3) 00:49:05.889 1068.861 - 1076.663: 0.0759% ( 5) 00:49:05.889 1076.663 - 1084.465: 0.0897% ( 6) 00:49:05.889 1084.465 - 1092.267: 0.1035% ( 6) 00:49:05.889 1092.267 - 1100.069: 0.1127% ( 4) 00:49:05.889 1100.069 - 1107.870: 0.1241% ( 5) 00:49:05.889 1107.870 - 1115.672: 0.1448% ( 9) 00:49:05.889 1115.672 - 1123.474: 0.1678% ( 10) 00:49:05.889 1123.474 - 1131.276: 0.1862% ( 8) 00:49:05.889 1131.276 - 1139.078: 0.2000% ( 6) 00:49:05.889 1139.078 - 1146.880: 0.2207% ( 9) 00:49:05.889 1146.880 - 1154.682: 0.2483% ( 12) 00:49:05.889 1154.682 - 1162.484: 0.2759% ( 12) 00:49:05.889 1162.484 - 1170.286: 0.2943% ( 8) 00:49:05.889 1170.286 - 1178.088: 0.3288% ( 15) 00:49:05.889 1178.088 - 1185.890: 0.3724% ( 19) 00:49:05.889 1185.890 - 1193.691: 0.4299% ( 25) 00:49:05.889 1193.691 - 1201.493: 0.4805% ( 22) 00:49:05.889 1201.493 - 1209.295: 0.5357% ( 24) 00:49:05.889 1209.295 - 1217.097: 0.5863% ( 22) 00:49:05.889 1217.097 - 1224.899: 0.6345% ( 21) 00:49:05.889 1224.899 - 1232.701: 0.7058% ( 31) 00:49:05.889 1232.701 - 1240.503: 0.7748% ( 30) 00:49:05.889 1240.503 - 1248.305: 0.8553% ( 35) 00:49:05.889 1248.305 - 1256.107: 0.9265% ( 31) 00:49:05.889 1256.107 - 1263.909: 1.0162% ( 39) 00:49:05.889 1263.909 - 1271.710: 1.1173% ( 44) 00:49:05.889 1271.710 - 1279.512: 1.2093% ( 40) 00:49:05.889 1279.512 - 1287.314: 1.3013% ( 40) 00:49:05.889 1287.314 - 1295.116: 1.4139% ( 49) 00:49:05.889 1295.116 - 1302.918: 1.5312% ( 51) 00:49:05.889 1302.918 - 1310.720: 1.6415% ( 48) 00:49:05.889 1310.720 - 1318.522: 1.7772% ( 59) 00:49:05.889 1318.522 - 1326.324: 1.9358% ( 69) 00:49:05.889 1326.324 - 1334.126: 2.1013% ( 72) 00:49:05.889 1334.126 - 1341.928: 2.2485% ( 64) 00:49:05.889 1341.928 - 1349.730: 2.4278% ( 
78) 00:49:05.889 1349.730 - 1357.531: 2.6071% ( 78) 00:49:05.889 1357.531 - 1365.333: 2.7612% ( 67) 00:49:05.889 1365.333 - 1373.135: 2.9336% ( 75) 00:49:05.889 1373.135 - 1380.937: 3.1451% ( 92) 00:49:05.889 1380.937 - 1388.739: 3.3221% ( 77) 00:49:05.889 1388.739 - 1396.541: 3.5291% ( 90) 00:49:05.889 1396.541 - 1404.343: 3.7153% ( 81) 00:49:05.889 1404.343 - 1412.145: 3.9130% ( 86) 00:49:05.889 1412.145 - 1419.947: 4.1337% ( 96) 00:49:05.889 1419.947 - 1427.749: 4.3337% ( 87) 00:49:05.889 1427.749 - 1435.550: 4.5292% ( 85) 00:49:05.889 1435.550 - 1443.352: 4.7338% ( 89) 00:49:05.889 1443.352 - 1451.154: 4.9706% ( 103) 00:49:05.889 1451.154 - 1458.956: 5.1637% ( 84) 00:49:05.889 1458.956 - 1466.758: 5.3775% ( 93) 00:49:05.889 1466.758 - 1474.560: 5.6166% ( 104) 00:49:05.889 1474.560 - 1482.362: 5.8557% ( 104) 00:49:05.889 1482.362 - 1490.164: 6.0419% ( 81) 00:49:05.889 1490.164 - 1497.966: 6.2580% ( 94) 00:49:05.889 1497.966 - 1505.768: 6.4742% ( 94) 00:49:05.889 1505.768 - 1513.570: 6.6788% ( 89) 00:49:05.889 1513.570 - 1521.371: 6.8972% ( 95) 00:49:05.889 1521.371 - 1529.173: 7.1202% ( 97) 00:49:05.889 1529.173 - 1536.975: 7.3340% ( 93) 00:49:05.889 1536.975 - 1544.777: 7.5317% ( 86) 00:49:05.889 1544.777 - 1552.579: 7.7134% ( 79) 00:49:05.889 1552.579 - 1560.381: 7.9456% ( 101) 00:49:05.889 1560.381 - 1568.183: 8.1525% ( 90) 00:49:05.889 1568.183 - 1575.985: 8.2950% ( 62) 00:49:05.889 1575.985 - 1583.787: 8.5088% ( 93) 00:49:05.889 1583.787 - 1591.589: 8.7111% ( 88) 00:49:05.889 1591.589 - 1599.390: 8.8997% ( 82) 00:49:05.889 1599.390 - 1607.192: 9.1181% ( 95) 00:49:05.889 1607.192 - 1614.994: 9.2882% ( 74) 00:49:05.889 1614.994 - 1622.796: 9.4767% ( 82) 00:49:05.889 1622.796 - 1630.598: 9.6469% ( 74) 00:49:05.889 1630.598 - 1638.400: 9.8492% ( 88) 00:49:05.889 1638.400 - 1646.202: 10.0262% ( 77) 00:49:05.889 1646.202 - 1654.004: 10.1940% ( 73) 00:49:05.889 1654.004 - 1661.806: 10.3803% ( 81) 00:49:05.889 1661.806 - 1669.608: 10.5343% ( 67) 00:49:05.889 1669.608 - 1677.410: 10.7366% ( 88) 00:49:05.889 1677.410 - 1685.211: 10.9389% ( 88) 00:49:05.889 1685.211 - 1693.013: 11.1091% ( 74) 00:49:05.889 1693.013 - 1700.815: 11.2861% ( 77) 00:49:05.889 1700.815 - 1708.617: 11.4470% ( 70) 00:49:05.889 1708.617 - 1716.419: 11.6631% ( 94) 00:49:05.889 1716.419 - 1724.221: 11.8402% ( 77) 00:49:05.889 1724.221 - 1732.023: 12.0195% ( 78) 00:49:05.889 1732.023 - 1739.825: 12.2287% ( 91) 00:49:05.889 1739.825 - 1747.627: 12.4011% ( 75) 00:49:05.889 1747.627 - 1755.429: 12.5943% ( 84) 00:49:05.889 1755.429 - 1763.230: 12.7805% ( 81) 00:49:05.889 1763.230 - 1771.032: 13.0127% ( 101) 00:49:05.889 1771.032 - 1778.834: 13.2357% ( 97) 00:49:05.889 1778.834 - 1786.636: 13.4311% ( 85) 00:49:05.889 1786.636 - 1794.438: 13.6334% ( 88) 00:49:05.889 1794.438 - 1802.240: 13.8495% ( 94) 00:49:05.889 1802.240 - 1810.042: 14.0657% ( 94) 00:49:05.889 1810.042 - 1817.844: 14.2979% ( 101) 00:49:05.889 1817.844 - 1825.646: 14.4933% ( 85) 00:49:05.889 1825.646 - 1833.448: 14.6933% ( 87) 00:49:05.889 1833.448 - 1841.250: 14.9071% ( 93) 00:49:05.889 1841.250 - 1849.051: 15.1370% ( 100) 00:49:05.889 1849.051 - 1856.853: 15.3485% ( 92) 00:49:05.889 1856.853 - 1864.655: 15.6336% ( 124) 00:49:05.889 1864.655 - 1872.457: 15.8451% ( 92) 00:49:05.889 1872.457 - 1880.259: 16.0842% ( 104) 00:49:05.889 1880.259 - 1888.061: 16.3003% ( 94) 00:49:05.889 1888.061 - 1895.863: 16.5532% ( 110) 00:49:05.889 1895.863 - 1903.665: 16.7441% ( 83) 00:49:05.889 1903.665 - 1911.467: 16.9855% ( 105) 00:49:05.889 1911.467 - 1919.269: 17.2062% ( 96) 
00:49:05.889 1919.269 - 1927.070: 17.4430% ( 103) 00:49:05.889 1927.070 - 1934.872: 17.6959% ( 110) 00:49:05.889 1934.872 - 1942.674: 17.9419% ( 107) 00:49:05.889 1942.674 - 1950.476: 18.1764% ( 102) 00:49:05.889 1950.476 - 1958.278: 18.4293% ( 110) 00:49:05.889 1958.278 - 1966.080: 18.7052% ( 120) 00:49:05.889 1966.080 - 1973.882: 18.9994% ( 128) 00:49:05.889 1973.882 - 1981.684: 19.2523% ( 110) 00:49:05.889 1981.684 - 1989.486: 19.5328% ( 122) 00:49:05.889 1989.486 - 1997.288: 19.7926% ( 113) 00:49:05.889 1997.288 - 2012.891: 20.3582% ( 246) 00:49:05.889 2012.891 - 2028.495: 20.8985% ( 235) 00:49:05.889 2028.495 - 2044.099: 21.4916% ( 258) 00:49:05.889 2044.099 - 2059.703: 22.0894% ( 260) 00:49:05.889 2059.703 - 2075.307: 22.6894% ( 261) 00:49:05.889 2075.307 - 2090.910: 23.3861% ( 303) 00:49:05.889 2090.910 - 2106.514: 24.0091% ( 271) 00:49:05.889 2106.514 - 2122.118: 24.7011% ( 301) 00:49:05.889 2122.118 - 2137.722: 25.3219% ( 270) 00:49:05.889 2137.722 - 2153.326: 26.0070% ( 298) 00:49:05.889 2153.326 - 2168.930: 26.7105% ( 306) 00:49:05.889 2168.930 - 2184.533: 27.3542% ( 280) 00:49:05.889 2184.533 - 2200.137: 28.0876% ( 319) 00:49:05.889 2200.137 - 2215.741: 28.7797% ( 301) 00:49:05.889 2215.741 - 2231.345: 29.5315% ( 327) 00:49:05.889 2231.345 - 2246.949: 30.2924% ( 331) 00:49:05.889 2246.949 - 2262.552: 31.0304% ( 321) 00:49:05.889 2262.552 - 2278.156: 31.8397% ( 352) 00:49:05.889 2278.156 - 2293.760: 32.6191% ( 339) 00:49:05.889 2293.760 - 2309.364: 33.3985% ( 339) 00:49:05.889 2309.364 - 2324.968: 34.1595% ( 331) 00:49:05.889 2324.968 - 2340.571: 34.9480% ( 343) 00:49:05.889 2340.571 - 2356.175: 35.7205% ( 336) 00:49:05.889 2356.175 - 2371.779: 36.4102% ( 300) 00:49:05.889 2371.779 - 2387.383: 37.1781% ( 334) 00:49:05.889 2387.383 - 2402.987: 37.9253% ( 325) 00:49:05.889 2402.987 - 2418.590: 38.6013% ( 294) 00:49:05.889 2418.590 - 2434.194: 39.3301% ( 317) 00:49:05.889 2434.194 - 2449.798: 40.0543% ( 315) 00:49:05.889 2449.798 - 2465.402: 40.7233% ( 291) 00:49:05.889 2465.402 - 2481.006: 41.3486% ( 272) 00:49:05.889 2481.006 - 2496.610: 42.0315% ( 297) 00:49:05.889 2496.610 - 2512.213: 42.7143% ( 297) 00:49:05.889 2512.213 - 2527.817: 43.3281% ( 267) 00:49:05.889 2527.817 - 2543.421: 44.0546% ( 316) 00:49:05.889 2543.421 - 2559.025: 44.6478% ( 258) 00:49:05.889 2559.025 - 2574.629: 45.2340% ( 255) 00:49:05.889 2574.629 - 2590.232: 45.8663% ( 275) 00:49:05.889 2590.232 - 2605.836: 46.4709% ( 263) 00:49:05.889 2605.836 - 2621.440: 47.0595% ( 256) 00:49:05.889 2621.440 - 2637.044: 47.6343% ( 250) 00:49:05.890 2637.044 - 2652.648: 48.1768% ( 236) 00:49:05.890 2652.648 - 2668.251: 48.7355% ( 243) 00:49:05.890 2668.251 - 2683.855: 49.3034% ( 247) 00:49:05.890 2683.855 - 2699.459: 49.7931% ( 213) 00:49:05.890 2699.459 - 2715.063: 50.2874% ( 215) 00:49:05.890 2715.063 - 2730.667: 50.8024% ( 224) 00:49:05.890 2730.667 - 2746.270: 51.2553% ( 197) 00:49:05.890 2746.270 - 2761.874: 51.7703% ( 224) 00:49:05.890 2761.874 - 2777.478: 52.2554% ( 211) 00:49:05.890 2777.478 - 2793.082: 52.7014% ( 194) 00:49:05.890 2793.082 - 2808.686: 53.2049% ( 219) 00:49:05.890 2808.686 - 2824.290: 53.7107% ( 220) 00:49:05.890 2824.290 - 2839.893: 54.1820% ( 205) 00:49:05.890 2839.893 - 2855.497: 54.6464% ( 202) 00:49:05.890 2855.497 - 2871.101: 55.0970% ( 196) 00:49:05.890 2871.101 - 2886.705: 55.5867% ( 213) 00:49:05.890 2886.705 - 2902.309: 56.0465% ( 200) 00:49:05.890 2902.309 - 2917.912: 56.5385% ( 214) 00:49:05.890 2917.912 - 2933.516: 57.0075% ( 204) 00:49:05.890 2933.516 - 2949.120: 57.4306% ( 184) 
00:49:05.890 2949.120 - 2964.724: 57.8743% ( 193) 00:49:05.890 2964.724 - 2980.328: 58.3065% ( 188) 00:49:05.890 2980.328 - 2995.931: 58.7433% ( 190) 00:49:05.890 2995.931 - 3011.535: 59.1664% ( 184) 00:49:05.890 3011.535 - 3027.139: 59.5687% ( 175) 00:49:05.890 3027.139 - 3042.743: 59.9825% ( 180) 00:49:05.890 3042.743 - 3058.347: 60.3826% ( 174) 00:49:05.890 3058.347 - 3073.950: 60.7251% ( 149) 00:49:05.890 3073.950 - 3089.554: 61.1734% ( 195) 00:49:05.890 3089.554 - 3105.158: 61.5597% ( 168) 00:49:05.890 3105.158 - 3120.762: 61.9229% ( 158) 00:49:05.890 3120.762 - 3136.366: 62.3253% ( 175) 00:49:05.890 3136.366 - 3151.970: 62.6954% ( 161) 00:49:05.890 3151.970 - 3167.573: 63.0587% ( 158) 00:49:05.890 3167.573 - 3183.177: 63.3897% ( 144) 00:49:05.890 3183.177 - 3198.781: 63.7576% ( 160) 00:49:05.890 3198.781 - 3214.385: 64.1185% ( 157) 00:49:05.890 3214.385 - 3229.989: 64.4542% ( 146) 00:49:05.890 3229.989 - 3245.592: 64.8358% ( 166) 00:49:05.890 3245.592 - 3261.196: 65.1600% ( 141) 00:49:05.890 3261.196 - 3276.800: 65.4934% ( 145) 00:49:05.890 3276.800 - 3292.404: 65.8612% ( 160) 00:49:05.890 3292.404 - 3308.008: 66.1923% ( 144) 00:49:05.890 3308.008 - 3323.611: 66.5624% ( 161) 00:49:05.890 3323.611 - 3339.215: 66.9004% ( 147) 00:49:05.890 3339.215 - 3354.819: 67.2637% ( 158) 00:49:05.890 3354.819 - 3370.423: 67.6085% ( 150) 00:49:05.890 3370.423 - 3386.027: 67.9442% ( 146) 00:49:05.890 3386.027 - 3401.630: 68.3396% ( 172) 00:49:05.890 3401.630 - 3417.234: 68.6707% ( 144) 00:49:05.890 3417.234 - 3432.838: 69.0362% ( 159) 00:49:05.890 3432.838 - 3448.442: 69.3696% ( 145) 00:49:05.890 3448.442 - 3464.046: 69.6662% ( 129) 00:49:05.890 3464.046 - 3479.650: 70.0133% ( 151) 00:49:05.890 3479.650 - 3495.253: 70.3513% ( 147) 00:49:05.890 3495.253 - 3510.857: 70.6548% ( 132) 00:49:05.890 3510.857 - 3526.461: 70.9674% ( 136) 00:49:05.890 3526.461 - 3542.065: 71.2755% ( 134) 00:49:05.890 3542.065 - 3557.669: 71.5882% ( 136) 00:49:05.890 3557.669 - 3573.272: 71.8848% ( 129) 00:49:05.890 3573.272 - 3588.876: 72.2043% ( 139) 00:49:05.890 3588.876 - 3604.480: 72.5124% ( 134) 00:49:05.890 3604.480 - 3620.084: 72.8021% ( 126) 00:49:05.890 3620.084 - 3635.688: 73.1010% ( 130) 00:49:05.890 3635.688 - 3651.291: 73.4205% ( 139) 00:49:05.890 3651.291 - 3666.895: 73.7010% ( 122) 00:49:05.890 3666.895 - 3682.499: 73.9907% ( 126) 00:49:05.890 3682.499 - 3698.103: 74.3402% ( 152) 00:49:05.890 3698.103 - 3713.707: 74.6069% ( 116) 00:49:05.890 3713.707 - 3729.310: 74.8690% ( 114) 00:49:05.890 3729.310 - 3744.914: 75.1701% ( 131) 00:49:05.890 3744.914 - 3760.518: 75.4782% ( 134) 00:49:05.890 3760.518 - 3776.122: 75.7633% ( 124) 00:49:05.890 3776.122 - 3791.726: 76.1013% ( 147) 00:49:05.890 3791.726 - 3807.330: 76.4001% ( 130) 00:49:05.890 3807.330 - 3822.933: 76.6645% ( 115) 00:49:05.890 3822.933 - 3838.537: 76.9450% ( 122) 00:49:05.890 3838.537 - 3854.141: 77.3129% ( 160) 00:49:05.890 3854.141 - 3869.745: 77.5704% ( 112) 00:49:05.890 3869.745 - 3885.349: 77.8508% ( 122) 00:49:05.890 3885.349 - 3900.952: 78.1865% ( 146) 00:49:05.890 3900.952 - 3916.556: 78.4532% ( 116) 00:49:05.890 3916.556 - 3932.160: 78.7245% ( 118) 00:49:05.890 3932.160 - 3947.764: 79.0463% ( 140) 00:49:05.890 3947.764 - 3963.368: 79.3314% ( 124) 00:49:05.890 3963.368 - 3978.971: 79.6188% ( 125) 00:49:05.890 3978.971 - 3994.575: 79.8901% ( 118) 00:49:05.890 3994.575 - 4025.783: 80.4925% ( 262) 00:49:05.890 4025.783 - 4056.990: 81.0281% ( 233) 00:49:05.890 4056.990 - 4088.198: 81.6075% ( 252) 00:49:05.890 4088.198 - 4119.406: 82.1616% ( 241) 
00:49:05.890 4119.406 - 4150.613: 82.7111% ( 239) 00:49:05.890 4150.613 - 4181.821: 83.3065% ( 259) 00:49:05.890 4181.821 - 4213.029: 83.8284% ( 227) 00:49:05.890 4213.029 - 4244.236: 84.4055% ( 251) 00:49:05.890 4244.236 - 4275.444: 84.9733% ( 247) 00:49:05.890 4275.444 - 4306.651: 85.5021% ( 230) 00:49:05.890 4306.651 - 4337.859: 86.0792% ( 251) 00:49:05.890 4337.859 - 4369.067: 86.6103% ( 231) 00:49:05.890 4369.067 - 4400.274: 87.1896% ( 252) 00:49:05.890 4400.274 - 4431.482: 87.7184% ( 230) 00:49:05.890 4431.482 - 4462.690: 88.2978% ( 252) 00:49:05.890 4462.690 - 4493.897: 88.8656% ( 247) 00:49:05.890 4493.897 - 4525.105: 89.3875% ( 227) 00:49:05.890 4525.105 - 4556.312: 89.9600% ( 249) 00:49:05.890 4556.312 - 4587.520: 90.5003% ( 235) 00:49:05.890 4587.520 - 4618.728: 91.0589% ( 243) 00:49:05.890 4618.728 - 4649.935: 91.6153% ( 242) 00:49:05.890 4649.935 - 4681.143: 92.1924% ( 251) 00:49:05.890 4681.143 - 4712.350: 92.7695% ( 251) 00:49:05.890 4712.350 - 4743.558: 93.3074% ( 234) 00:49:05.890 4743.558 - 4774.766: 93.8546% ( 238) 00:49:05.890 4774.766 - 4805.973: 94.4225% ( 247) 00:49:05.890 4805.973 - 4837.181: 94.9352% ( 223) 00:49:05.890 4837.181 - 4868.389: 95.5076% ( 249) 00:49:05.890 4868.389 - 4899.596: 95.9789% ( 205) 00:49:05.890 4899.596 - 4930.804: 96.5008% ( 227) 00:49:05.890 4930.804 - 4962.011: 97.0066% ( 220) 00:49:05.890 4962.011 - 4993.219: 97.4411% ( 189) 00:49:05.890 4993.219 - 5024.427: 97.8435% ( 175) 00:49:05.890 5024.427 - 5055.634: 98.1539% ( 135) 00:49:05.890 5055.634 - 5086.842: 98.4182% ( 115) 00:49:05.890 5086.842 - 5118.050: 98.6114% ( 84) 00:49:05.890 5118.050 - 5149.257: 98.7171% ( 46) 00:49:05.890 5149.257 - 5180.465: 98.7815% ( 28) 00:49:05.890 5180.465 - 5211.672: 98.8160% ( 15) 00:49:05.890 5211.672 - 5242.880: 98.8528% ( 16) 00:49:05.890 5242.880 - 5274.088: 98.8827% ( 13) 00:49:05.890 5274.088 - 5305.295: 98.9171% ( 15) 00:49:05.890 5305.295 - 5336.503: 98.9424% ( 11) 00:49:05.890 5336.503 - 5367.710: 98.9723% ( 13) 00:49:05.890 5367.710 - 5398.918: 98.9953% ( 10) 00:49:05.890 5398.918 - 5430.126: 99.0206% ( 11) 00:49:05.890 5430.126 - 5461.333: 99.0390% ( 8) 00:49:05.890 5461.333 - 5492.541: 99.0712% ( 14) 00:49:05.890 5492.541 - 5523.749: 99.0942% ( 10) 00:49:05.890 5523.749 - 5554.956: 99.1241% ( 13) 00:49:05.890 5554.956 - 5586.164: 99.1379% ( 6) 00:49:05.890 5586.164 - 5617.371: 99.1654% ( 12) 00:49:05.890 5617.371 - 5648.579: 99.1907% ( 11) 00:49:05.890 5648.579 - 5679.787: 99.2091% ( 8) 00:49:05.890 5679.787 - 5710.994: 99.2275% ( 8) 00:49:05.890 5710.994 - 5742.202: 99.2436% ( 7) 00:49:05.890 5742.202 - 5773.410: 99.2643% ( 9) 00:49:05.890 5773.410 - 5804.617: 99.2850% ( 9) 00:49:05.890 5804.617 - 5835.825: 99.3011% ( 7) 00:49:05.890 5835.825 - 5867.032: 99.3149% ( 6) 00:49:05.890 5867.032 - 5898.240: 99.3310% ( 7) 00:49:05.890 5898.240 - 5929.448: 99.3471% ( 7) 00:49:05.890 5929.448 - 5960.655: 99.3632% ( 7) 00:49:05.890 5960.655 - 5991.863: 99.3816% ( 8) 00:49:05.890 5991.863 - 6023.070: 99.3999% ( 8) 00:49:05.890 6023.070 - 6054.278: 99.4137% ( 6) 00:49:05.890 6054.278 - 6085.486: 99.4298% ( 7) 00:49:05.890 6085.486 - 6116.693: 99.4436% ( 6) 00:49:05.890 6116.693 - 6147.901: 99.4528% ( 4) 00:49:05.890 6147.901 - 6179.109: 99.4689% ( 7) 00:49:05.890 6179.109 - 6210.316: 99.4804% ( 5) 00:49:05.890 6210.316 - 6241.524: 99.4942% ( 6) 00:49:05.890 6241.524 - 6272.731: 99.5057% ( 5) 00:49:05.890 6272.731 - 6303.939: 99.5149% ( 4) 00:49:05.890 6303.939 - 6335.147: 99.5287% ( 6) 00:49:05.890 6335.147 - 6366.354: 99.5379% ( 4) 00:49:05.890 
6366.354 - 6397.562: 99.5494% ( 5) 00:49:05.890 6397.562 - 6428.770: 99.5609% ( 5) 00:49:05.890 6428.770 - 6459.977: 99.5747% ( 6) 00:49:05.890 6459.977 - 6491.185: 99.5862% ( 5) 00:49:05.890 6491.185 - 6522.392: 99.6000% ( 6) 00:49:05.890 6522.392 - 6553.600: 99.6115% ( 5) 00:49:05.890 6553.600 - 6584.808: 99.6253% ( 6) 00:49:05.890 6584.808 - 6616.015: 99.6505% ( 11) 00:49:05.890 6616.015 - 6647.223: 99.6666% ( 7) 00:49:05.890 6647.223 - 6678.430: 99.6781% ( 5) 00:49:05.890 6678.430 - 6709.638: 99.6919% ( 6) 00:49:05.890 6709.638 - 6740.846: 99.6965% ( 2) 00:49:05.890 6740.846 - 6772.053: 99.7057% ( 4) 00:49:05.890 6772.053 - 6803.261: 99.7172% ( 5) 00:49:05.890 6803.261 - 6834.469: 99.7195% ( 1) 00:49:05.890 6834.469 - 6865.676: 99.7310% ( 5) 00:49:05.890 6865.676 - 6896.884: 99.7356% ( 2) 00:49:05.890 6896.884 - 6928.091: 99.7471% ( 5) 00:49:05.890 6928.091 - 6959.299: 99.7540% ( 3) 00:49:05.890 6959.299 - 6990.507: 99.7609% ( 3) 00:49:05.890 6990.507 - 7021.714: 99.7701% ( 4) 00:49:05.890 7021.714 - 7052.922: 99.7747% ( 2) 00:49:05.890 7052.922 - 7084.130: 99.7862% ( 5) 00:49:05.890 7084.130 - 7115.337: 99.7954% ( 4) 00:49:05.890 7115.337 - 7146.545: 99.8000% ( 2) 00:49:05.890 7146.545 - 7177.752: 99.8115% ( 5) 00:49:05.890 7177.752 - 7208.960: 99.8161% ( 2) 00:49:05.891 7208.960 - 7240.168: 99.8253% ( 4) 00:49:05.891 7240.168 - 7271.375: 99.8322% ( 3) 00:49:05.891 7271.375 - 7302.583: 99.8414% ( 4) 00:49:05.891 7302.583 - 7333.790: 99.8483% ( 3) 00:49:05.891 7333.790 - 7364.998: 99.8575% ( 4) 00:49:05.891 7364.998 - 7396.206: 99.8667% ( 4) 00:49:05.891 7396.206 - 7427.413: 99.8713% ( 2) 00:49:05.891 7427.413 - 7458.621: 99.8804% ( 4) 00:49:05.891 7458.621 - 7489.829: 99.8919% ( 5) 00:49:05.891 7489.829 - 7521.036: 99.8988% ( 3) 00:49:05.891 7521.036 - 7552.244: 99.9057% ( 3) 00:49:05.891 7552.244 - 7583.451: 99.9126% ( 3) 00:49:05.891 7583.451 - 7614.659: 99.9218% ( 4) 00:49:05.891 7614.659 - 7645.867: 99.9287% ( 3) 00:49:05.891 7645.867 - 7677.074: 99.9379% ( 4) 00:49:05.891 7677.074 - 7708.282: 99.9448% ( 3) 00:49:05.891 7708.282 - 7739.490: 99.9494% ( 2) 00:49:05.891 7739.490 - 7770.697: 99.9540% ( 2) 00:49:05.891 7770.697 - 7801.905: 99.9563% ( 1) 00:49:05.891 7801.905 - 7833.112: 99.9609% ( 2) 00:49:05.891 7833.112 - 7864.320: 99.9678% ( 3) 00:49:05.891 7864.320 - 7895.528: 99.9701% ( 1) 00:49:05.891 7895.528 - 7926.735: 99.9747% ( 2) 00:49:05.891 7926.735 - 7957.943: 99.9793% ( 2) 00:49:05.891 7957.943 - 7989.150: 99.9839% ( 2) 00:49:05.891 7989.150 - 8051.566: 99.9908% ( 3) 00:49:05.891 8051.566 - 8113.981: 99.9954% ( 2) 00:49:05.891 8113.981 - 8176.396: 99.9977% ( 1) 00:49:05.891 8176.396 - 8238.811: 100.0000% ( 1) 00:49:05.891 00:49:05.891 19:42:21 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:49:05.891 00:49:05.891 real 0m2.731s 00:49:05.891 user 0m2.293s 00:49:05.891 sys 0m0.290s 00:49:05.891 ************************************ 00:49:05.891 END TEST nvme_perf 00:49:05.891 ************************************ 00:49:05.891 19:42:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:05.891 19:42:21 -- common/autotest_common.sh@10 -- # set +x 00:49:05.891 19:42:21 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:49:05.891 19:42:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:49:05.891 19:42:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:05.891 19:42:21 -- common/autotest_common.sh@10 -- # set +x 00:49:05.891 ************************************ 00:49:05.891 START TEST 
nvme_hello_world 00:49:05.891 ************************************ 00:49:05.891 19:42:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:49:06.486 Initializing NVMe Controllers 00:49:06.486 Attached to 0000:00:10.0 00:49:06.486 Namespace ID: 1 size: 5GB 00:49:06.486 Initialization complete. 00:49:06.486 INFO: using host memory buffer for IO 00:49:06.486 Hello world! 00:49:06.486 00:49:06.486 real 0m0.385s 00:49:06.486 user 0m0.127s 00:49:06.486 sys 0m0.173s 00:49:06.486 19:42:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:06.486 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:49:06.486 ************************************ 00:49:06.486 END TEST nvme_hello_world 00:49:06.486 ************************************ 00:49:06.486 19:42:22 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:49:06.486 19:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:06.487 19:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:06.487 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:49:06.487 ************************************ 00:49:06.487 START TEST nvme_sgl 00:49:06.487 ************************************ 00:49:06.487 19:42:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:49:06.745 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:49:06.745 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:49:06.745 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:49:06.745 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:49:06.745 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:49:06.745 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:49:06.745 NVMe Readv/Writev Request test 00:49:06.745 Attached to 0000:00:10.0 00:49:06.745 0000:00:10.0: build_io_request_2 test passed 00:49:06.745 0000:00:10.0: build_io_request_4 test passed 00:49:06.745 0000:00:10.0: build_io_request_5 test passed 00:49:06.745 0000:00:10.0: build_io_request_6 test passed 00:49:06.745 0000:00:10.0: build_io_request_7 test passed 00:49:06.745 0000:00:10.0: build_io_request_10 test passed 00:49:06.745 Cleaning up... 00:49:06.745 00:49:06.745 real 0m0.404s 00:49:06.745 user 0m0.192s 00:49:06.745 sys 0m0.123s 00:49:06.745 19:42:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:06.745 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:49:06.745 ************************************ 00:49:06.745 END TEST nvme_sgl 00:49:06.745 ************************************ 00:49:06.745 19:42:22 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:49:06.745 19:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:06.745 19:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:06.745 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:49:06.745 ************************************ 00:49:06.745 START TEST nvme_e2edp 00:49:06.745 ************************************ 00:49:06.745 19:42:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:49:07.312 NVMe Write/Read with End-to-End data protection test 00:49:07.312 Attached to 0000:00:10.0 00:49:07.312 Cleaning up... 
00:49:07.312 00:49:07.312 real 0m0.364s 00:49:07.312 user 0m0.107s 00:49:07.312 sys 0m0.177s 00:49:07.312 19:42:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:07.312 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:49:07.312 ************************************ 00:49:07.312 END TEST nvme_e2edp 00:49:07.312 ************************************ 00:49:07.312 19:42:23 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:49:07.312 19:42:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:07.312 19:42:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:07.312 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:49:07.312 ************************************ 00:49:07.312 START TEST nvme_reserve 00:49:07.312 ************************************ 00:49:07.312 19:42:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:49:07.570 ===================================================== 00:49:07.570 NVMe Controller at PCI bus 0, device 16, function 0 00:49:07.570 ===================================================== 00:49:07.570 Reservations: Not Supported 00:49:07.570 Reservation test passed 00:49:07.828 00:49:07.828 real 0m0.397s 00:49:07.828 user 0m0.158s 00:49:07.828 sys 0m0.153s 00:49:07.828 19:42:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:07.828 ************************************ 00:49:07.828 END TEST nvme_reserve 00:49:07.828 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:49:07.828 ************************************ 00:49:07.828 19:42:23 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:49:07.828 19:42:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:07.828 19:42:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:07.828 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:49:07.828 ************************************ 00:49:07.828 START TEST nvme_err_injection 00:49:07.828 ************************************ 00:49:07.828 19:42:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:49:08.086 NVMe Error Injection test 00:49:08.086 Attached to 0000:00:10.0 00:49:08.086 0000:00:10.0: get features failed as expected 00:49:08.086 0000:00:10.0: get features successfully as expected 00:49:08.086 0000:00:10.0: read failed as expected 00:49:08.086 0000:00:10.0: read successfully as expected 00:49:08.086 Cleaning up... 
00:49:08.086 00:49:08.086 real 0m0.345s 00:49:08.086 user 0m0.108s 00:49:08.086 sys 0m0.148s 00:49:08.086 19:42:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:08.086 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:49:08.086 ************************************ 00:49:08.086 END TEST nvme_err_injection 00:49:08.086 ************************************ 00:49:08.086 19:42:23 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:49:08.086 19:42:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:49:08.086 19:42:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:08.086 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:49:08.086 ************************************ 00:49:08.086 START TEST nvme_overhead 00:49:08.086 ************************************ 00:49:08.086 19:42:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:49:09.462 Initializing NVMe Controllers 00:49:09.462 Attached to 0000:00:10.0 00:49:09.462 Initialization complete. Launching workers. 00:49:09.462 submit (in ns) avg, min, max = 17011.4, 12824.8, 159942.9 00:49:09.462 complete (in ns) avg, min, max = 11613.3, 9177.1, 4059231.4 00:49:09.462 00:49:09.462 Submit histogram 00:49:09.462 ================ 00:49:09.462 Range in us Cumulative Count 00:49:09.462 12.800 - 12.861: 0.0110% ( 1) 00:49:09.462 13.288 - 13.349: 0.0219% ( 1) 00:49:09.462 13.775 - 13.836: 0.0329% ( 1) 00:49:09.462 13.958 - 14.019: 0.0439% ( 1) 00:49:09.462 14.019 - 14.080: 0.0549% ( 1) 00:49:09.462 14.080 - 14.141: 0.1755% ( 11) 00:49:09.462 14.141 - 14.202: 0.5485% ( 34) 00:49:09.462 14.202 - 14.263: 1.2287% ( 62) 00:49:09.462 14.263 - 14.324: 2.8195% ( 145) 00:49:09.462 14.324 - 14.385: 5.4306% ( 238) 00:49:09.462 14.385 - 14.446: 8.9523% ( 321) 00:49:09.462 14.446 - 14.507: 12.4410% ( 318) 00:49:09.462 14.507 - 14.568: 15.6226% ( 290) 00:49:09.462 14.568 - 14.629: 18.3214% ( 246) 00:49:09.462 14.629 - 14.690: 20.3182% ( 182) 00:49:09.462 14.690 - 14.750: 21.8431% ( 139) 00:49:09.462 14.750 - 14.811: 23.2913% ( 132) 00:49:09.462 14.811 - 14.872: 24.6736% ( 126) 00:49:09.462 14.872 - 14.933: 26.2315% ( 142) 00:49:09.462 14.933 - 14.994: 27.9978% ( 161) 00:49:09.462 14.994 - 15.055: 30.3127% ( 211) 00:49:09.462 15.055 - 15.116: 32.8470% ( 231) 00:49:09.462 15.116 - 15.177: 35.3264% ( 226) 00:49:09.462 15.177 - 15.238: 37.5315% ( 201) 00:49:09.462 15.238 - 15.299: 39.3527% ( 166) 00:49:09.462 15.299 - 15.360: 40.8228% ( 134) 00:49:09.462 15.360 - 15.421: 41.9967% ( 107) 00:49:09.462 15.421 - 15.482: 43.0060% ( 92) 00:49:09.462 15.482 - 15.543: 43.6204% ( 56) 00:49:09.462 15.543 - 15.604: 44.1909% ( 52) 00:49:09.462 15.604 - 15.726: 44.8382% ( 59) 00:49:09.462 15.726 - 15.848: 45.1783% ( 31) 00:49:09.462 15.848 - 15.970: 45.5403% ( 33) 00:49:09.462 15.970 - 16.091: 45.9133% ( 34) 00:49:09.462 16.091 - 16.213: 46.2973% ( 35) 00:49:09.462 16.213 - 16.335: 46.6045% ( 28) 00:49:09.462 16.335 - 16.457: 47.1201% ( 47) 00:49:09.462 16.457 - 16.579: 48.2392% ( 102) 00:49:09.462 16.579 - 16.701: 51.9473% ( 338) 00:49:09.462 16.701 - 16.823: 56.9830% ( 459) 00:49:09.462 16.823 - 16.945: 62.2271% ( 478) 00:49:09.462 16.945 - 17.067: 66.4070% ( 381) 00:49:09.462 17.067 - 17.189: 69.0730% ( 243) 00:49:09.462 17.189 - 17.310: 70.7954% ( 157) 00:49:09.462 17.310 - 17.432: 71.9254% ( 103) 00:49:09.462 17.432 - 17.554: 72.7482% ( 75) 00:49:09.462 17.554 - 17.676: 73.3297% ( 53) 00:49:09.462 
17.676 - 17.798: 73.9002% ( 52) 00:49:09.462 17.798 - 17.920: 74.3390% ( 40) 00:49:09.462 17.920 - 18.042: 74.8437% ( 46) 00:49:09.462 18.042 - 18.164: 75.2715% ( 39) 00:49:09.462 18.164 - 18.286: 76.0505% ( 71) 00:49:09.462 18.286 - 18.408: 76.5990% ( 50) 00:49:09.462 18.408 - 18.530: 77.4218% ( 75) 00:49:09.462 18.530 - 18.651: 78.3544% ( 85) 00:49:09.462 18.651 - 18.773: 79.3637% ( 92) 00:49:09.462 18.773 - 18.895: 80.3182% ( 87) 00:49:09.462 18.895 - 19.017: 81.3714% ( 96) 00:49:09.462 19.017 - 19.139: 82.0516% ( 62) 00:49:09.462 19.139 - 19.261: 82.8854% ( 76) 00:49:09.462 19.261 - 19.383: 83.6862% ( 73) 00:49:09.462 19.383 - 19.505: 84.4652% ( 71) 00:49:09.462 19.505 - 19.627: 85.2002% ( 67) 00:49:09.462 19.627 - 19.749: 85.9792% ( 71) 00:49:09.462 19.749 - 19.870: 86.7142% ( 67) 00:49:09.462 19.870 - 19.992: 87.2518% ( 49) 00:49:09.462 19.992 - 20.114: 87.9320% ( 62) 00:49:09.462 20.114 - 20.236: 88.6561% ( 66) 00:49:09.462 20.236 - 20.358: 89.3472% ( 63) 00:49:09.462 20.358 - 20.480: 89.9726% ( 57) 00:49:09.462 20.480 - 20.602: 90.5101% ( 49) 00:49:09.462 20.602 - 20.724: 91.0368% ( 48) 00:49:09.462 20.724 - 20.846: 91.6402% ( 55) 00:49:09.462 20.846 - 20.968: 92.1887% ( 50) 00:49:09.462 20.968 - 21.090: 92.7043% ( 47) 00:49:09.462 21.090 - 21.211: 93.2419% ( 49) 00:49:09.462 21.211 - 21.333: 93.6807% ( 40) 00:49:09.462 21.333 - 21.455: 94.0757% ( 36) 00:49:09.462 21.455 - 21.577: 94.3719% ( 27) 00:49:09.462 21.577 - 21.699: 94.7010% ( 30) 00:49:09.462 21.699 - 21.821: 94.9863% ( 26) 00:49:09.462 21.821 - 21.943: 95.2386% ( 23) 00:49:09.462 21.943 - 22.065: 95.5019% ( 24) 00:49:09.462 22.065 - 22.187: 95.7543% ( 23) 00:49:09.462 22.187 - 22.309: 96.0176% ( 24) 00:49:09.462 22.309 - 22.430: 96.2041% ( 17) 00:49:09.462 22.430 - 22.552: 96.4344% ( 21) 00:49:09.462 22.552 - 22.674: 96.6100% ( 16) 00:49:09.462 22.674 - 22.796: 96.7636% ( 14) 00:49:09.462 22.796 - 22.918: 96.8623% ( 9) 00:49:09.462 22.918 - 23.040: 96.9281% ( 6) 00:49:09.462 23.040 - 23.162: 97.0159% ( 8) 00:49:09.462 23.162 - 23.284: 97.1476% ( 12) 00:49:09.462 23.284 - 23.406: 97.2573% ( 10) 00:49:09.462 23.406 - 23.528: 97.3670% ( 10) 00:49:09.462 23.528 - 23.650: 97.4109% ( 4) 00:49:09.462 23.650 - 23.771: 97.5206% ( 10) 00:49:09.462 23.771 - 23.893: 97.6083% ( 8) 00:49:09.462 23.893 - 24.015: 97.6851% ( 7) 00:49:09.462 24.015 - 24.137: 97.7290% ( 4) 00:49:09.462 24.137 - 24.259: 97.8058% ( 7) 00:49:09.462 24.259 - 24.381: 97.8607% ( 5) 00:49:09.462 24.381 - 24.503: 97.9813% ( 11) 00:49:09.462 24.503 - 24.625: 98.0143% ( 3) 00:49:09.462 24.625 - 24.747: 98.0801% ( 6) 00:49:09.462 24.747 - 24.869: 98.1349% ( 5) 00:49:09.462 24.869 - 24.990: 98.2008% ( 6) 00:49:09.462 24.990 - 25.112: 98.2447% ( 4) 00:49:09.462 25.112 - 25.234: 98.3105% ( 6) 00:49:09.462 25.234 - 25.356: 98.3982% ( 8) 00:49:09.462 25.356 - 25.478: 98.4421% ( 4) 00:49:09.462 25.478 - 25.600: 98.4860% ( 4) 00:49:09.462 25.600 - 25.722: 98.5080% ( 2) 00:49:09.462 25.722 - 25.844: 98.5518% ( 4) 00:49:09.462 25.844 - 25.966: 98.5957% ( 4) 00:49:09.462 25.966 - 26.088: 98.6396% ( 4) 00:49:09.462 26.088 - 26.210: 98.6835% ( 4) 00:49:09.462 26.210 - 26.331: 98.7054% ( 2) 00:49:09.462 26.331 - 26.453: 98.7164% ( 1) 00:49:09.463 26.453 - 26.575: 98.7383% ( 2) 00:49:09.463 26.575 - 26.697: 98.7713% ( 3) 00:49:09.463 26.697 - 26.819: 98.8261% ( 5) 00:49:09.463 26.819 - 26.941: 98.8481% ( 2) 00:49:09.463 26.941 - 27.063: 98.8590% ( 1) 00:49:09.463 27.063 - 27.185: 98.9029% ( 4) 00:49:09.463 27.429 - 27.550: 98.9248% ( 2) 00:49:09.463 27.550 - 27.672: 98.9578% ( 3) 
00:49:09.463 27.672 - 27.794: 99.0016% ( 4) 00:49:09.463 27.794 - 27.916: 99.0236% ( 2) 00:49:09.463 27.916 - 28.038: 99.0455% ( 2) 00:49:09.463 28.038 - 28.160: 99.0675% ( 2) 00:49:09.463 28.160 - 28.282: 99.0784% ( 1) 00:49:09.463 28.282 - 28.404: 99.1004% ( 2) 00:49:09.463 28.404 - 28.526: 99.1552% ( 5) 00:49:09.463 28.526 - 28.648: 99.1991% ( 4) 00:49:09.463 28.648 - 28.770: 99.2101% ( 1) 00:49:09.463 28.770 - 28.891: 99.2211% ( 1) 00:49:09.463 29.013 - 29.135: 99.2430% ( 2) 00:49:09.463 29.257 - 29.379: 99.2540% ( 1) 00:49:09.463 29.501 - 29.623: 99.2869% ( 3) 00:49:09.463 29.623 - 29.745: 99.3088% ( 2) 00:49:09.463 29.745 - 29.867: 99.3308% ( 2) 00:49:09.463 29.867 - 29.989: 99.3417% ( 1) 00:49:09.463 29.989 - 30.110: 99.3637% ( 2) 00:49:09.463 30.232 - 30.354: 99.3747% ( 1) 00:49:09.463 30.476 - 30.598: 99.4076% ( 3) 00:49:09.463 30.598 - 30.720: 99.4405% ( 3) 00:49:09.463 30.720 - 30.842: 99.4624% ( 2) 00:49:09.463 30.964 - 31.086: 99.4953% ( 3) 00:49:09.463 31.086 - 31.208: 99.5283% ( 3) 00:49:09.463 31.208 - 31.451: 99.5502% ( 2) 00:49:09.463 31.451 - 31.695: 99.5721% ( 2) 00:49:09.463 31.939 - 32.183: 99.5941% ( 2) 00:49:09.463 32.183 - 32.427: 99.6270% ( 3) 00:49:09.463 32.914 - 33.158: 99.6380% ( 1) 00:49:09.463 33.402 - 33.646: 99.6489% ( 1) 00:49:09.463 33.890 - 34.133: 99.6599% ( 1) 00:49:09.463 34.377 - 34.621: 99.6709% ( 1) 00:49:09.463 34.865 - 35.109: 99.6818% ( 1) 00:49:09.463 35.109 - 35.352: 99.6928% ( 1) 00:49:09.463 35.352 - 35.596: 99.7038% ( 1) 00:49:09.463 35.596 - 35.840: 99.7367% ( 3) 00:49:09.463 35.840 - 36.084: 99.7477% ( 1) 00:49:09.463 36.084 - 36.328: 99.7586% ( 1) 00:49:09.463 37.547 - 37.790: 99.7806% ( 2) 00:49:09.463 38.278 - 38.522: 99.8025% ( 2) 00:49:09.463 39.010 - 39.253: 99.8135% ( 1) 00:49:09.463 39.253 - 39.497: 99.8245% ( 1) 00:49:09.463 39.741 - 39.985: 99.8354% ( 1) 00:49:09.463 39.985 - 40.229: 99.8464% ( 1) 00:49:09.463 40.229 - 40.472: 99.8574% ( 1) 00:49:09.463 40.472 - 40.716: 99.8683% ( 1) 00:49:09.463 41.204 - 41.448: 99.8793% ( 1) 00:49:09.463 42.179 - 42.423: 99.8903% ( 1) 00:49:09.463 44.617 - 44.861: 99.9013% ( 1) 00:49:09.463 45.592 - 45.836: 99.9122% ( 1) 00:49:09.463 47.055 - 47.299: 99.9232% ( 1) 00:49:09.463 49.006 - 49.250: 99.9342% ( 1) 00:49:09.463 72.168 - 72.655: 99.9451% ( 1) 00:49:09.463 74.118 - 74.606: 99.9561% ( 1) 00:49:09.463 99.962 - 100.450: 99.9671% ( 1) 00:49:09.463 102.888 - 103.375: 99.9781% ( 1) 00:49:09.463 141.410 - 142.385: 99.9890% ( 1) 00:49:09.463 159.939 - 160.914: 100.0000% ( 1) 00:49:09.463 00:49:09.463 Complete histogram 00:49:09.463 ================== 00:49:09.463 Range in us Cumulative Count 00:49:09.463 9.143 - 9.204: 0.0549% ( 5) 00:49:09.463 9.204 - 9.265: 2.3258% ( 207) 00:49:09.463 9.265 - 9.326: 8.9413% ( 603) 00:49:09.463 9.326 - 9.387: 14.6242% ( 518) 00:49:09.463 9.387 - 9.448: 18.6725% ( 369) 00:49:09.463 9.448 - 9.509: 21.0752% ( 219) 00:49:09.463 9.509 - 9.570: 22.5014% ( 130) 00:49:09.463 9.570 - 9.630: 24.4981% ( 182) 00:49:09.463 9.630 - 9.691: 28.7767% ( 390) 00:49:09.463 9.691 - 9.752: 33.5162% ( 432) 00:49:09.463 9.752 - 9.813: 36.6429% ( 285) 00:49:09.463 9.813 - 9.874: 38.4531% ( 165) 00:49:09.463 9.874 - 9.935: 39.3198% ( 79) 00:49:09.463 9.935 - 9.996: 40.0219% ( 64) 00:49:09.463 9.996 - 10.057: 40.4608% ( 40) 00:49:09.463 10.057 - 10.118: 40.9325% ( 43) 00:49:09.463 10.118 - 10.179: 41.4262% ( 45) 00:49:09.463 10.179 - 10.240: 41.7773% ( 32) 00:49:09.463 10.240 - 10.301: 42.1613% ( 35) 00:49:09.463 10.301 - 10.362: 42.5453% ( 35) 00:49:09.463 10.362 - 10.423: 42.7976% ( 
23) 00:49:09.463 10.423 - 10.484: 43.0609% ( 24) 00:49:09.463 10.484 - 10.545: 43.4339% ( 34) 00:49:09.463 10.545 - 10.606: 43.9715% ( 49) 00:49:09.463 10.606 - 10.667: 44.7175% ( 68) 00:49:09.463 10.667 - 10.728: 46.9007% ( 199) 00:49:09.463 10.728 - 10.789: 52.1668% ( 480) 00:49:09.463 10.789 - 10.850: 57.2902% ( 467) 00:49:09.463 10.850 - 10.910: 61.7663% ( 408) 00:49:09.463 10.910 - 10.971: 64.5749% ( 256) 00:49:09.463 10.971 - 11.032: 66.7032% ( 194) 00:49:09.463 11.032 - 11.093: 68.1733% ( 134) 00:49:09.463 11.093 - 11.154: 69.2595% ( 99) 00:49:09.463 11.154 - 11.215: 69.9616% ( 64) 00:49:09.463 11.215 - 11.276: 70.5101% ( 50) 00:49:09.463 11.276 - 11.337: 70.8941% ( 35) 00:49:09.463 11.337 - 11.398: 71.3439% ( 41) 00:49:09.463 11.398 - 11.459: 71.7499% ( 37) 00:49:09.463 11.459 - 11.520: 72.0570% ( 28) 00:49:09.463 11.520 - 11.581: 72.3533% ( 27) 00:49:09.463 11.581 - 11.642: 72.5837% ( 21) 00:49:09.463 11.642 - 11.703: 72.8579% ( 25) 00:49:09.463 11.703 - 11.764: 73.1103% ( 23) 00:49:09.463 11.764 - 11.825: 73.4723% ( 33) 00:49:09.463 11.825 - 11.886: 73.8234% ( 32) 00:49:09.463 11.886 - 11.947: 74.0867% ( 24) 00:49:09.463 11.947 - 12.008: 74.4487% ( 33) 00:49:09.463 12.008 - 12.069: 74.7230% ( 25) 00:49:09.463 12.069 - 12.130: 75.0082% ( 26) 00:49:09.463 12.130 - 12.190: 75.4032% ( 36) 00:49:09.463 12.190 - 12.251: 75.8969% ( 45) 00:49:09.463 12.251 - 12.312: 76.4454% ( 50) 00:49:09.463 12.312 - 12.373: 77.0159% ( 52) 00:49:09.463 12.373 - 12.434: 77.5864% ( 52) 00:49:09.463 12.434 - 12.495: 78.3434% ( 69) 00:49:09.463 12.495 - 12.556: 78.8700% ( 48) 00:49:09.463 12.556 - 12.617: 79.6818% ( 74) 00:49:09.463 12.617 - 12.678: 80.4279% ( 68) 00:49:09.463 12.678 - 12.739: 80.9984% ( 52) 00:49:09.463 12.739 - 12.800: 81.5798% ( 53) 00:49:09.463 12.800 - 12.861: 82.3368% ( 69) 00:49:09.463 12.861 - 12.922: 83.0609% ( 66) 00:49:09.463 12.922 - 12.983: 83.6753% ( 56) 00:49:09.463 12.983 - 13.044: 84.4652% ( 72) 00:49:09.463 13.044 - 13.105: 85.0466% ( 53) 00:49:09.463 13.105 - 13.166: 85.6391% ( 54) 00:49:09.463 13.166 - 13.227: 86.2754% ( 58) 00:49:09.463 13.227 - 13.288: 87.0214% ( 68) 00:49:09.463 13.288 - 13.349: 87.6796% ( 60) 00:49:09.463 13.349 - 13.410: 88.2172% ( 49) 00:49:09.463 13.410 - 13.470: 88.7658% ( 50) 00:49:09.463 13.470 - 13.531: 89.3143% ( 50) 00:49:09.463 13.531 - 13.592: 89.9616% ( 59) 00:49:09.463 13.592 - 13.653: 90.3785% ( 38) 00:49:09.463 13.653 - 13.714: 90.8064% ( 39) 00:49:09.463 13.714 - 13.775: 91.1794% ( 34) 00:49:09.463 13.775 - 13.836: 91.6292% ( 41) 00:49:09.463 13.836 - 13.897: 92.1229% ( 45) 00:49:09.463 13.897 - 13.958: 92.5069% ( 35) 00:49:09.463 13.958 - 14.019: 92.8470% ( 31) 00:49:09.463 14.019 - 14.080: 93.2419% ( 36) 00:49:09.463 14.080 - 14.141: 93.5930% ( 32) 00:49:09.463 14.141 - 14.202: 93.9221% ( 30) 00:49:09.463 14.202 - 14.263: 94.2183% ( 27) 00:49:09.463 14.263 - 14.324: 94.4707% ( 23) 00:49:09.463 14.324 - 14.385: 94.7559% ( 26) 00:49:09.463 14.385 - 14.446: 94.9753% ( 20) 00:49:09.463 14.446 - 14.507: 95.2386% ( 24) 00:49:09.463 14.507 - 14.568: 95.4690% ( 21) 00:49:09.463 14.568 - 14.629: 95.6994% ( 21) 00:49:09.463 14.629 - 14.690: 95.8201% ( 11) 00:49:09.463 14.690 - 14.750: 96.0066% ( 17) 00:49:09.463 14.750 - 14.811: 96.1382% ( 12) 00:49:09.463 14.811 - 14.872: 96.2699% ( 12) 00:49:09.463 14.872 - 14.933: 96.4564% ( 17) 00:49:09.463 14.933 - 14.994: 96.6539% ( 18) 00:49:09.463 14.994 - 15.055: 96.7855% ( 12) 00:49:09.463 15.055 - 15.116: 96.8733% ( 8) 00:49:09.463 15.116 - 15.177: 96.9281% ( 5) 00:49:09.463 15.177 - 15.238: 
97.0049% ( 7) 00:49:09.463 15.238 - 15.299: 97.0488% ( 4) 00:49:09.463 15.299 - 15.360: 97.1476% ( 9) 00:49:09.463 15.360 - 15.421: 97.2463% ( 9) 00:49:09.463 15.421 - 15.482: 97.3231% ( 7) 00:49:09.463 15.482 - 15.543: 97.3779% ( 5) 00:49:09.463 15.543 - 15.604: 97.4767% ( 9) 00:49:09.463 15.604 - 15.726: 97.6193% ( 13) 00:49:09.463 15.726 - 15.848: 97.7729% ( 14) 00:49:09.463 15.848 - 15.970: 97.8058% ( 3) 00:49:09.463 15.970 - 16.091: 97.9046% ( 9) 00:49:09.463 16.091 - 16.213: 97.9265% ( 2) 00:49:09.463 16.213 - 16.335: 97.9594% ( 3) 00:49:09.463 16.335 - 16.457: 98.0252% ( 6) 00:49:09.463 16.457 - 16.579: 98.0911% ( 6) 00:49:09.463 16.579 - 16.701: 98.1459% ( 5) 00:49:09.463 16.701 - 16.823: 98.1788% ( 3) 00:49:09.463 16.823 - 16.945: 98.2117% ( 3) 00:49:09.463 16.945 - 17.067: 98.2556% ( 4) 00:49:09.463 17.067 - 17.189: 98.3214% ( 6) 00:49:09.463 17.189 - 17.310: 98.3434% ( 2) 00:49:09.463 17.310 - 17.432: 98.3763% ( 3) 00:49:09.463 17.432 - 17.554: 98.3982% ( 2) 00:49:09.463 17.554 - 17.676: 98.4202% ( 2) 00:49:09.464 17.676 - 17.798: 98.4421% ( 2) 00:49:09.464 17.798 - 17.920: 98.5189% ( 7) 00:49:09.464 17.920 - 18.042: 98.5299% ( 1) 00:49:09.464 18.042 - 18.164: 98.5738% ( 4) 00:49:09.464 18.164 - 18.286: 98.6067% ( 3) 00:49:09.464 18.286 - 18.408: 98.6177% ( 1) 00:49:09.464 18.408 - 18.530: 98.6396% ( 2) 00:49:09.464 18.530 - 18.651: 98.7054% ( 6) 00:49:09.464 18.651 - 18.773: 98.7383% ( 3) 00:49:09.464 18.773 - 18.895: 98.7493% ( 1) 00:49:09.464 18.895 - 19.017: 98.7713% ( 2) 00:49:09.464 19.139 - 19.261: 98.7822% ( 1) 00:49:09.464 19.261 - 19.383: 98.8261% ( 4) 00:49:09.464 19.383 - 19.505: 98.8371% ( 1) 00:49:09.464 19.505 - 19.627: 98.8481% ( 1) 00:49:09.464 19.627 - 19.749: 98.8810% ( 3) 00:49:09.464 19.749 - 19.870: 98.9139% ( 3) 00:49:09.464 19.870 - 19.992: 98.9358% ( 2) 00:49:09.464 20.114 - 20.236: 98.9468% ( 1) 00:49:09.464 20.236 - 20.358: 98.9578% ( 1) 00:49:09.464 20.358 - 20.480: 98.9687% ( 1) 00:49:09.464 20.602 - 20.724: 98.9907% ( 2) 00:49:09.464 20.724 - 20.846: 99.0236% ( 3) 00:49:09.464 20.846 - 20.968: 99.0675% ( 4) 00:49:09.464 20.968 - 21.090: 99.0894% ( 2) 00:49:09.464 21.090 - 21.211: 99.1114% ( 2) 00:49:09.464 21.333 - 21.455: 99.1223% ( 1) 00:49:09.464 21.455 - 21.577: 99.1333% ( 1) 00:49:09.464 21.577 - 21.699: 99.1443% ( 1) 00:49:09.464 21.699 - 21.821: 99.1552% ( 1) 00:49:09.464 21.821 - 21.943: 99.1772% ( 2) 00:49:09.464 21.943 - 22.065: 99.1991% ( 2) 00:49:09.464 22.065 - 22.187: 99.2101% ( 1) 00:49:09.464 22.187 - 22.309: 99.2540% ( 4) 00:49:09.464 22.552 - 22.674: 99.2649% ( 1) 00:49:09.464 22.796 - 22.918: 99.2759% ( 1) 00:49:09.464 22.918 - 23.040: 99.2869% ( 1) 00:49:09.464 23.406 - 23.528: 99.2979% ( 1) 00:49:09.464 23.528 - 23.650: 99.3088% ( 1) 00:49:09.464 23.650 - 23.771: 99.3417% ( 3) 00:49:09.464 23.771 - 23.893: 99.3637% ( 2) 00:49:09.464 24.259 - 24.381: 99.3856% ( 2) 00:49:09.464 24.381 - 24.503: 99.4076% ( 2) 00:49:09.464 24.503 - 24.625: 99.4185% ( 1) 00:49:09.464 24.990 - 25.112: 99.4295% ( 1) 00:49:09.464 25.356 - 25.478: 99.4515% ( 2) 00:49:09.464 25.478 - 25.600: 99.4624% ( 1) 00:49:09.464 25.600 - 25.722: 99.4844% ( 2) 00:49:09.464 25.844 - 25.966: 99.5063% ( 2) 00:49:09.464 26.088 - 26.210: 99.5173% ( 1) 00:49:09.464 26.210 - 26.331: 99.5392% ( 2) 00:49:09.464 26.331 - 26.453: 99.5612% ( 2) 00:49:09.464 26.575 - 26.697: 99.5721% ( 1) 00:49:09.464 27.429 - 27.550: 99.5831% ( 1) 00:49:09.464 27.794 - 27.916: 99.6050% ( 2) 00:49:09.464 28.160 - 28.282: 99.6160% ( 1) 00:49:09.464 28.404 - 28.526: 99.6599% ( 4) 00:49:09.464 
28.526 - 28.648: 99.6709% ( 1) 00:49:09.464 28.648 - 28.770: 99.6818% ( 1) 00:49:09.464 28.770 - 28.891: 99.6928% ( 1) 00:49:09.464 29.257 - 29.379: 99.7148% ( 2) 00:49:09.464 29.623 - 29.745: 99.7257% ( 1) 00:49:09.464 29.989 - 30.110: 99.7367% ( 1) 00:49:09.464 30.476 - 30.598: 99.7477% ( 1) 00:49:09.464 30.720 - 30.842: 99.7696% ( 2) 00:49:09.464 30.842 - 30.964: 99.8025% ( 3) 00:49:09.464 31.086 - 31.208: 99.8135% ( 1) 00:49:09.464 31.451 - 31.695: 99.8354% ( 2) 00:49:09.464 31.695 - 31.939: 99.8464% ( 1) 00:49:09.464 31.939 - 32.183: 99.8574% ( 1) 00:49:09.464 32.427 - 32.670: 99.8683% ( 1) 00:49:09.464 32.670 - 32.914: 99.8793% ( 1) 00:49:09.464 33.646 - 33.890: 99.8903% ( 1) 00:49:09.464 34.133 - 34.377: 99.9013% ( 1) 00:49:09.464 35.352 - 35.596: 99.9122% ( 1) 00:49:09.464 35.596 - 35.840: 99.9232% ( 1) 00:49:09.464 37.790 - 38.034: 99.9342% ( 1) 00:49:09.464 41.448 - 41.691: 99.9451% ( 1) 00:49:09.464 41.691 - 41.935: 99.9561% ( 1) 00:49:09.464 47.055 - 47.299: 99.9671% ( 1) 00:49:09.464 49.737 - 49.981: 99.9781% ( 1) 00:49:09.464 104.838 - 105.326: 99.9890% ( 1) 00:49:09.464 4056.990 - 4088.198: 100.0000% ( 1) 00:49:09.464 00:49:09.464 00:49:09.464 real 0m1.344s 00:49:09.464 user 0m1.141s 00:49:09.464 sys 0m0.125s 00:49:09.464 19:42:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:09.464 19:42:25 -- common/autotest_common.sh@10 -- # set +x 00:49:09.464 ************************************ 00:49:09.464 END TEST nvme_overhead 00:49:09.464 ************************************ 00:49:09.464 19:42:25 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:49:09.464 19:42:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:49:09.464 19:42:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:09.464 19:42:25 -- common/autotest_common.sh@10 -- # set +x 00:49:09.464 ************************************ 00:49:09.464 START TEST nvme_arbitration 00:49:09.464 ************************************ 00:49:09.464 19:42:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:49:13.647 Initializing NVMe Controllers 00:49:13.647 Attached to 0000:00:10.0 00:49:13.647 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:49:13.647 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:49:13.647 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:49:13.647 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:49:13.647 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:49:13.647 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:49:13.647 Initialization complete. Launching workers. 
00:49:13.647 Starting thread on core 1 with urgent priority queue 00:49:13.647 Starting thread on core 2 with urgent priority queue 00:49:13.647 Starting thread on core 3 with urgent priority queue 00:49:13.647 Starting thread on core 0 with urgent priority queue 00:49:13.647 QEMU NVMe Ctrl (12340 ) core 0: 832.00 IO/s 120.19 secs/100000 ios 00:49:13.647 QEMU NVMe Ctrl (12340 ) core 1: 810.67 IO/s 123.36 secs/100000 ios 00:49:13.647 QEMU NVMe Ctrl (12340 ) core 2: 512.00 IO/s 195.31 secs/100000 ios 00:49:13.647 QEMU NVMe Ctrl (12340 ) core 3: 426.67 IO/s 234.38 secs/100000 ios 00:49:13.647 ======================================================== 00:49:13.647 00:49:13.647 00:49:13.647 real 0m3.539s 00:49:13.647 user 0m9.549s 00:49:13.647 sys 0m0.132s 00:49:13.647 19:42:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:13.647 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:49:13.647 ************************************ 00:49:13.647 END TEST nvme_arbitration 00:49:13.647 ************************************ 00:49:13.647 19:42:28 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:49:13.647 19:42:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:49:13.647 19:42:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:13.647 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:49:13.647 ************************************ 00:49:13.647 START TEST nvme_single_aen 00:49:13.647 ************************************ 00:49:13.647 19:42:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:49:13.647 Asynchronous Event Request test 00:49:13.647 Attached to 0000:00:10.0 00:49:13.647 Reset controller to setup AER completions for this process 00:49:13.647 Registering asynchronous event callbacks... 00:49:13.647 Getting orig temperature thresholds of all controllers 00:49:13.647 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:49:13.647 Setting all controllers temperature threshold low to trigger AER 00:49:13.647 Waiting for all controllers temperature threshold to be set lower 00:49:13.647 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:49:13.647 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:49:13.647 Waiting for all controllers to trigger AER and reset threshold 00:49:13.647 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:49:13.647 Cleaning up... 
00:49:13.647 00:49:13.647 real 0m0.329s 00:49:13.647 user 0m0.126s 00:49:13.647 sys 0m0.134s 00:49:13.647 19:42:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:13.647 ************************************ 00:49:13.647 END TEST nvme_single_aen 00:49:13.647 ************************************ 00:49:13.647 19:42:29 -- common/autotest_common.sh@10 -- # set +x 00:49:13.647 19:42:29 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:49:13.647 19:42:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:13.648 19:42:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:13.648 19:42:29 -- common/autotest_common.sh@10 -- # set +x 00:49:13.648 ************************************ 00:49:13.648 START TEST nvme_doorbell_aers 00:49:13.648 ************************************ 00:49:13.648 19:42:29 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:49:13.648 19:42:29 -- nvme/nvme.sh@70 -- # bdfs=() 00:49:13.648 19:42:29 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:49:13.648 19:42:29 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:49:13.648 19:42:29 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:49:13.648 19:42:29 -- common/autotest_common.sh@1499 -- # bdfs=() 00:49:13.648 19:42:29 -- common/autotest_common.sh@1499 -- # local bdfs 00:49:13.648 19:42:29 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:49:13.648 19:42:29 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:49:13.648 19:42:29 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:49:13.648 19:42:29 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:49:13.648 19:42:29 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:49:13.648 19:42:29 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:49:13.648 19:42:29 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:49:13.905 [2024-04-18 19:42:29.787804] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152224) is not found. Dropping the request. 00:49:23.887 Executing: test_write_invalid_db 00:49:23.887 Waiting for AER completion... 00:49:23.887 Failure: test_write_invalid_db 00:49:23.887 00:49:23.887 Executing: test_invalid_db_write_overflow_sq 00:49:23.887 Waiting for AER completion... 00:49:23.887 Failure: test_invalid_db_write_overflow_sq 00:49:23.887 00:49:23.887 Executing: test_invalid_db_write_overflow_cq 00:49:23.887 Waiting for AER completion... 
00:49:23.887 Failure: test_invalid_db_write_overflow_cq 00:49:23.887 00:49:23.887 00:49:23.887 real 0m10.114s 00:49:23.887 user 0m7.359s 00:49:23.887 sys 0m2.672s 00:49:23.887 19:42:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:23.887 19:42:39 -- common/autotest_common.sh@10 -- # set +x 00:49:23.887 ************************************ 00:49:23.887 END TEST nvme_doorbell_aers 00:49:23.887 ************************************ 00:49:23.887 19:42:39 -- nvme/nvme.sh@97 -- # uname 00:49:23.887 19:42:39 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:49:23.887 19:42:39 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:49:23.887 19:42:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:49:23.887 19:42:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:23.887 19:42:39 -- common/autotest_common.sh@10 -- # set +x 00:49:23.887 ************************************ 00:49:23.887 START TEST nvme_multi_aen 00:49:23.887 ************************************ 00:49:23.887 19:42:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:49:24.144 [2024-04-18 19:42:39.845498] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152224) is not found. Dropping the request. 00:49:24.144 [2024-04-18 19:42:39.846447] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152224) is not found. Dropping the request. 00:49:24.144 [2024-04-18 19:42:39.846659] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152224) is not found. Dropping the request. 00:49:24.144 Child process pid: 152432 00:49:24.402 [Child] Asynchronous Event Request test 00:49:24.402 [Child] Attached to 0000:00:10.0 00:49:24.402 [Child] Registering asynchronous event callbacks... 00:49:24.402 [Child] Getting orig temperature thresholds of all controllers 00:49:24.402 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:49:24.402 [Child] Waiting for all controllers to trigger AER and reset threshold 00:49:24.402 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:49:24.402 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:49:24.402 [Child] Cleaning up... 00:49:24.660 Asynchronous Event Request test 00:49:24.660 Attached to 0000:00:10.0 00:49:24.660 Reset controller to setup AER completions for this process 00:49:24.660 Registering asynchronous event callbacks... 00:49:24.660 Getting orig temperature thresholds of all controllers 00:49:24.660 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:49:24.660 Setting all controllers temperature threshold low to trigger AER 00:49:24.660 Waiting for all controllers temperature threshold to be set lower 00:49:24.660 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:49:24.660 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:49:24.660 Waiting for all controllers to trigger AER and reset threshold 00:49:24.660 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:49:24.660 Cleaning up... 
00:49:24.660 00:49:24.660 real 0m0.812s 00:49:24.660 user 0m0.323s 00:49:24.660 sys 0m0.276s 00:49:24.660 19:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:24.660 19:42:40 -- common/autotest_common.sh@10 -- # set +x 00:49:24.660 ************************************ 00:49:24.660 END TEST nvme_multi_aen 00:49:24.660 ************************************ 00:49:24.660 19:42:40 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:49:24.660 19:42:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:49:24.660 19:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:24.660 19:42:40 -- common/autotest_common.sh@10 -- # set +x 00:49:24.660 ************************************ 00:49:24.660 START TEST nvme_startup 00:49:24.660 ************************************ 00:49:24.660 19:42:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:49:24.917 Initializing NVMe Controllers 00:49:24.917 Attached to 0000:00:10.0 00:49:24.917 Initialization complete. 00:49:24.917 Time used:237986.438 (us). 00:49:24.917 00:49:24.917 real 0m0.363s 00:49:24.917 user 0m0.111s 00:49:24.917 sys 0m0.168s 00:49:24.917 19:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:24.917 ************************************ 00:49:24.917 END TEST nvme_startup 00:49:24.917 19:42:40 -- common/autotest_common.sh@10 -- # set +x 00:49:24.917 ************************************ 00:49:25.175 19:42:40 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:49:25.175 19:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:25.175 19:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:25.175 19:42:40 -- common/autotest_common.sh@10 -- # set +x 00:49:25.175 ************************************ 00:49:25.175 START TEST nvme_multi_secondary 00:49:25.175 ************************************ 00:49:25.175 19:42:40 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:49:25.175 19:42:40 -- nvme/nvme.sh@52 -- # pid0=152514 00:49:25.175 19:42:40 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:49:25.175 19:42:40 -- nvme/nvme.sh@54 -- # pid1=152515 00:49:25.175 19:42:40 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:49:25.175 19:42:40 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:49:29.359 Initializing NVMe Controllers 00:49:29.359 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:29.359 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:49:29.359 Initialization complete. Launching workers. 00:49:29.360 ======================================================== 00:49:29.360 Latency(us) 00:49:29.360 Device Information : IOPS MiB/s Average min max 00:49:29.360 PCIE (0000:00:10.0) NSID 1 from core 1: 25525.32 99.71 626.47 157.43 6036.05 00:49:29.360 ======================================================== 00:49:29.360 Total : 25525.32 99.71 626.47 157.43 6036.05 00:49:29.360 00:49:29.360 Initializing NVMe Controllers 00:49:29.360 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:29.360 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:49:29.360 Initialization complete. Launching workers. 
00:49:29.360 ======================================================== 00:49:29.360 Latency(us) 00:49:29.360 Device Information : IOPS MiB/s Average min max 00:49:29.360 PCIE (0000:00:10.0) NSID 1 from core 2: 12496.00 48.81 1279.96 168.71 17986.72 00:49:29.360 ======================================================== 00:49:29.360 Total : 12496.00 48.81 1279.96 168.71 17986.72 00:49:29.360 00:49:29.360 19:42:44 -- nvme/nvme.sh@56 -- # wait 152514 00:49:30.785 Initializing NVMe Controllers 00:49:30.785 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:30.785 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:49:30.785 Initialization complete. Launching workers. 00:49:30.785 ======================================================== 00:49:30.785 Latency(us) 00:49:30.785 Device Information : IOPS MiB/s Average min max 00:49:30.785 PCIE (0000:00:10.0) NSID 1 from core 0: 34236.80 133.74 466.84 155.31 13851.16 00:49:30.785 ======================================================== 00:49:30.785 Total : 34236.80 133.74 466.84 155.31 13851.16 00:49:30.785 00:49:30.785 19:42:46 -- nvme/nvme.sh@57 -- # wait 152515 00:49:30.785 19:42:46 -- nvme/nvme.sh@61 -- # pid0=152588 00:49:30.785 19:42:46 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:49:30.785 19:42:46 -- nvme/nvme.sh@63 -- # pid1=152589 00:49:30.785 19:42:46 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:49:30.785 19:42:46 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:49:34.138 Initializing NVMe Controllers 00:49:34.138 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:34.138 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:49:34.138 Initialization complete. Launching workers. 00:49:34.138 ======================================================== 00:49:34.138 Latency(us) 00:49:34.138 Device Information : IOPS MiB/s Average min max 00:49:34.138 PCIE (0000:00:10.0) NSID 1 from core 0: 31049.20 121.29 514.89 162.73 6137.45 00:49:34.138 ======================================================== 00:49:34.138 Total : 31049.20 121.29 514.89 162.73 6137.45 00:49:34.138 00:49:34.138 Initializing NVMe Controllers 00:49:34.138 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:34.138 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:49:34.138 Initialization complete. Launching workers. 00:49:34.138 ======================================================== 00:49:34.138 Latency(us) 00:49:34.138 Device Information : IOPS MiB/s Average min max 00:49:34.138 PCIE (0000:00:10.0) NSID 1 from core 1: 32549.32 127.15 491.19 162.41 7151.47 00:49:34.138 ======================================================== 00:49:34.138 Total : 32549.32 127.15 491.19 162.41 7151.47 00:49:34.138 00:49:36.082 Initializing NVMe Controllers 00:49:36.082 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:36.082 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:49:36.082 Initialization complete. Launching workers. 
00:49:36.082 ======================================================== 00:49:36.082 Latency(us) 00:49:36.082 Device Information : IOPS MiB/s Average min max 00:49:36.082 PCIE (0000:00:10.0) NSID 1 from core 2: 17124.40 66.89 933.89 125.56 28969.92 00:49:36.082 ======================================================== 00:49:36.082 Total : 17124.40 66.89 933.89 125.56 28969.92 00:49:36.082 00:49:36.082 19:42:51 -- nvme/nvme.sh@65 -- # wait 152588 00:49:36.082 19:42:51 -- nvme/nvme.sh@66 -- # wait 152589 00:49:36.082 00:49:36.082 real 0m10.971s 00:49:36.082 user 0m18.772s 00:49:36.082 sys 0m1.050s 00:49:36.082 19:42:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:36.082 ************************************ 00:49:36.082 END TEST nvme_multi_secondary 00:49:36.082 ************************************ 00:49:36.082 19:42:51 -- common/autotest_common.sh@10 -- # set +x 00:49:36.082 19:42:51 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:49:36.082 19:42:51 -- nvme/nvme.sh@102 -- # kill_stub 00:49:36.082 19:42:51 -- common/autotest_common.sh@1075 -- # [[ -e /proc/151698 ]] 00:49:36.082 19:42:51 -- common/autotest_common.sh@1076 -- # kill 151698 00:49:36.082 19:42:51 -- common/autotest_common.sh@1077 -- # wait 151698 00:49:36.082 [2024-04-18 19:42:51.940610] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152431) is not found. Dropping the request. 00:49:36.082 [2024-04-18 19:42:51.940740] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152431) is not found. Dropping the request. 00:49:36.082 [2024-04-18 19:42:51.940784] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152431) is not found. Dropping the request. 00:49:36.082 [2024-04-18 19:42:51.940821] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152431) is not found. Dropping the request. 00:49:36.339 [2024-04-18 19:42:52.224772] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:49:36.339 19:42:52 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:49:36.339 19:42:52 -- common/autotest_common.sh@1083 -- # echo 2 00:49:36.339 19:42:52 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:49:36.339 19:42:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:36.339 19:42:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:36.339 19:42:52 -- common/autotest_common.sh@10 -- # set +x 00:49:36.597 ************************************ 00:49:36.597 START TEST bdev_nvme_reset_stuck_adm_cmd 00:49:36.597 ************************************ 00:49:36.597 19:42:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:49:36.597 * Looking for test storage... 
00:49:36.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:49:36.597 19:42:52 -- common/autotest_common.sh@1510 -- # bdfs=() 00:49:36.597 19:42:52 -- common/autotest_common.sh@1510 -- # local bdfs 00:49:36.597 19:42:52 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:49:36.597 19:42:52 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:49:36.597 19:42:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:49:36.597 19:42:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:49:36.597 19:42:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:49:36.597 19:42:52 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:49:36.597 19:42:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:49:36.597 19:42:52 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:49:36.597 19:42:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:49:36.597 19:42:52 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:49:36.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=152760 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:49:36.597 19:42:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 152760 00:49:36.597 19:42:52 -- common/autotest_common.sh@817 -- # '[' -z 152760 ']' 00:49:36.597 19:42:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:36.597 19:42:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:49:36.597 19:42:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:36.597 19:42:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:49:36.597 19:42:52 -- common/autotest_common.sh@10 -- # set +x 00:49:36.597 [2024-04-18 19:42:52.483109] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:49:36.598 [2024-04-18 19:42:52.483385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152760 ] 00:49:36.856 [2024-04-18 19:42:52.697284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:37.113 [2024-04-18 19:42:52.984972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:37.113 [2024-04-18 19:42:52.985059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:37.113 [2024-04-18 19:42:52.985175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:49:37.113 [2024-04-18 19:42:52.985187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:38.498 19:42:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:49:38.499 19:42:54 -- common/autotest_common.sh@850 -- # return 0 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:49:38.499 19:42:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:49:38.499 19:42:54 -- common/autotest_common.sh@10 -- # set +x 00:49:38.499 nvme0n1 00:49:38.499 19:42:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_lv4M2.txt 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:49:38.499 19:42:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:49:38.499 19:42:54 -- common/autotest_common.sh@10 -- # set +x 00:49:38.499 true 00:49:38.499 19:42:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713469374 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=152795 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:49:38.499 19:42:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:49:40.395 19:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:49:40.395 19:42:56 -- common/autotest_common.sh@10 -- # set +x 00:49:40.395 [2024-04-18 19:42:56.100589] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:49:40.395 [2024-04-18 19:42:56.101548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:40.395 [2024-04-18 19:42:56.101725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:49:40.395 [2024-04-18 19:42:56.101886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:40.395 [2024-04-18 19:42:56.103851] 
bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:49:40.395 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 152795 00:49:40.395 19:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 152795 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 152795 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:40.395 19:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:49:40.395 19:42:56 -- common/autotest_common.sh@10 -- # set +x 00:49:40.395 19:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_lv4M2.txt 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:49:40.395 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_lv4M2.txt 00:49:40.396 19:42:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 152760 00:49:40.396 19:42:56 -- common/autotest_common.sh@936 -- # '[' -z 152760 ']' 00:49:40.396 19:42:56 -- common/autotest_common.sh@940 -- # kill -0 152760 00:49:40.396 19:42:56 -- common/autotest_common.sh@941 -- # uname 00:49:40.396 19:42:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:49:40.396 19:42:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
152760 00:49:40.396 killing process with pid 152760 00:49:40.396 19:42:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:49:40.396 19:42:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:49:40.396 19:42:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 152760' 00:49:40.396 19:42:56 -- common/autotest_common.sh@955 -- # kill 152760 00:49:40.396 19:42:56 -- common/autotest_common.sh@960 -- # wait 152760 00:49:43.677 19:42:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:49:43.677 19:42:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:49:43.677 00:49:43.677 real 0m6.823s 00:49:43.677 user 0m23.706s 00:49:43.677 sys 0m0.682s 00:49:43.677 19:42:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:43.677 ************************************ 00:49:43.677 19:42:59 -- common/autotest_common.sh@10 -- # set +x 00:49:43.677 END TEST bdev_nvme_reset_stuck_adm_cmd 00:49:43.677 ************************************ 00:49:43.677 19:42:59 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:49:43.677 19:42:59 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:49:43.677 19:42:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:43.677 19:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:43.677 19:42:59 -- common/autotest_common.sh@10 -- # set +x 00:49:43.677 ************************************ 00:49:43.677 START TEST nvme_fio 00:49:43.677 ************************************ 00:49:43.677 19:42:59 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:49:43.677 19:42:59 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:49:43.677 19:42:59 -- nvme/nvme.sh@32 -- # ran_fio=false 00:49:43.677 19:42:59 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:49:43.677 19:42:59 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:49:43.677 19:42:59 -- common/autotest_common.sh@1499 -- # bdfs=() 00:49:43.677 19:42:59 -- common/autotest_common.sh@1499 -- # local bdfs 00:49:43.677 19:42:59 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:49:43.677 19:42:59 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:49:43.677 19:42:59 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:49:43.677 19:42:59 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:49:43.677 19:42:59 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:49:43.677 19:42:59 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:49:43.677 19:42:59 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:49:43.677 19:42:59 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:49:43.677 19:42:59 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:49:43.677 19:42:59 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:49:43.677 19:42:59 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:49:43.936 19:42:59 -- nvme/nvme.sh@41 -- # bs=4096 00:49:43.936 19:42:59 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:49:43.936 19:42:59 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:49:43.936 19:42:59 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:49:43.936 19:42:59 -- common/autotest_common.sh@1325 -- # sanitizers=(libasan libclang_rt.asan) 00:49:43.936 19:42:59 -- common/autotest_common.sh@1325 -- # local sanitizers 00:49:43.936 19:42:59 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:43.936 19:42:59 -- common/autotest_common.sh@1327 -- # shift 00:49:43.936 19:42:59 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:49:43.936 19:42:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:49:43.936 19:42:59 -- common/autotest_common.sh@1331 -- # grep libasan 00:49:43.936 19:42:59 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:43.936 19:42:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:49:43.936 19:42:59 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:49:43.936 19:42:59 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:49:43.936 19:42:59 -- common/autotest_common.sh@1333 -- # break 00:49:43.936 19:42:59 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:49:43.936 19:42:59 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:49:44.194 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:49:44.194 fio-3.35 00:49:44.194 Starting 1 thread 00:49:47.474 00:49:47.474 test: (groupid=0, jobs=1): err= 0: pid=152979: Thu Apr 18 19:43:03 2024 00:49:47.474 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec) 00:49:47.474 slat (usec): min=4, max=272, avg= 5.67, stdev= 2.20 00:49:47.474 clat (usec): min=456, max=10485, avg=3564.24, stdev=831.33 00:49:47.474 lat (usec): min=463, max=10490, avg=3569.91, stdev=832.05 00:49:47.474 clat percentiles (usec): 00:49:47.474 | 1.00th=[ 1991], 5.00th=[ 2573], 10.00th=[ 2835], 20.00th=[ 3032], 00:49:47.474 | 30.00th=[ 3130], 40.00th=[ 3195], 50.00th=[ 3326], 60.00th=[ 3687], 00:49:47.474 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4424], 95.00th=[ 5276], 00:49:47.474 | 99.00th=[ 6521], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 9372], 00:49:47.474 | 99.99th=[10421] 00:49:47.474 bw ( KiB/s): min=66592, max=81096, per=100.00%, avg=74541.33, stdev=7351.89, samples=3 00:49:47.474 iops : min=16648, max=20274, avg=18635.33, stdev=1837.97, samples=3 00:49:47.474 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec); 0 zone resets 00:49:47.474 slat (usec): min=4, max=402, avg= 5.86, stdev= 2.61 00:49:47.474 clat (usec): min=346, max=10556, avg=3564.62, stdev=835.53 00:49:47.474 lat (usec): min=353, max=10561, avg=3570.48, stdev=836.30 00:49:47.474 clat percentiles (usec): 00:49:47.474 | 1.00th=[ 1975], 5.00th=[ 2573], 10.00th=[ 2835], 20.00th=[ 3032], 00:49:47.474 | 30.00th=[ 3097], 40.00th=[ 3195], 50.00th=[ 3326], 60.00th=[ 3687], 00:49:47.474 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4424], 95.00th=[ 5276], 00:49:47.474 | 99.00th=[ 6521], 99.50th=[ 7373], 99.90th=[ 8455], 99.95th=[ 9372], 00:49:47.474 | 99.99th=[10290] 00:49:47.474 bw ( KiB/s): min=66544, max=81408, per=100.00%, avg=74605.33, stdev=7511.51, 
samples=3 00:49:47.474 iops : min=16636, max=20352, avg=18651.33, stdev=1877.88, samples=3 00:49:47.474 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:49:47.474 lat (msec) : 2=1.01%, 4=79.91%, 10=19.02%, 20=0.03% 00:49:47.474 cpu : usr=99.50%, sys=0.35%, ctx=21, majf=0, minf=36 00:49:47.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:49:47.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:47.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:49:47.474 issued rwts: total=35800,35799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:47.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:49:47.474 00:49:47.474 Run status group 0 (all jobs): 00:49:47.474 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:49:47.474 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:49:48.040 ----------------------------------------------------- 00:49:48.040 Suppressions used: 00:49:48.040 count bytes template 00:49:48.040 1 32 /usr/src/fio/parse.c 00:49:48.040 ----------------------------------------------------- 00:49:48.040 00:49:48.040 19:43:03 -- nvme/nvme.sh@44 -- # ran_fio=true 00:49:48.040 19:43:03 -- nvme/nvme.sh@46 -- # true 00:49:48.040 00:49:48.040 real 0m4.498s 00:49:48.040 user 0m3.702s 00:49:48.040 sys 0m0.469s 00:49:48.040 19:43:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:48.040 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:49:48.040 ************************************ 00:49:48.040 END TEST nvme_fio 00:49:48.040 ************************************ 00:49:48.040 ************************************ 00:49:48.040 END TEST nvme 00:49:48.040 ************************************ 00:49:48.040 00:49:48.040 real 0m49.654s 00:49:48.040 user 2m11.894s 00:49:48.040 sys 0m10.255s 00:49:48.040 19:43:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:48.040 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:49:48.040 19:43:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:49:48.040 19:43:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:49:48.040 19:43:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:48.040 19:43:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:48.040 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:49:48.040 ************************************ 00:49:48.040 START TEST nvme_scc 00:49:48.040 ************************************ 00:49:48.040 19:43:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:49:48.040 * Looking for test storage... 
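The fio stage that completed above is launched through a sanitizer-aware wrapper: the xtrace shows the ASan runtime that the SPDK fio plugin links against being located with ldd and preloaded ahead of the plugin. A minimal sketch of that pattern, reusing the paths from the trace (not the exact helper from autotest_common.sh):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# Preload the matching libasan first so its interceptors are in place
# before fio loads the external ioengine.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096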
00:49:48.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:49:48.040 19:43:03 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:49:48.040 19:43:03 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:49:48.040 19:43:03 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:49:48.040 19:43:03 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:49:48.040 19:43:03 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:48.040 19:43:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:48.040 19:43:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:48.040 19:43:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:48.040 19:43:03 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:48.040 19:43:03 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:48.041 19:43:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:48.041 19:43:03 -- paths/export.sh@5 -- # export PATH 00:49:48.041 19:43:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:48.041 19:43:03 -- nvme/functions.sh@10 -- # ctrls=() 00:49:48.041 19:43:03 -- nvme/functions.sh@10 -- # declare -A ctrls 00:49:48.041 19:43:03 -- nvme/functions.sh@11 -- # nvmes=() 00:49:48.041 19:43:03 -- nvme/functions.sh@11 -- # declare -A nvmes 00:49:48.041 19:43:03 -- nvme/functions.sh@12 -- # bdfs=() 00:49:48.041 19:43:03 -- nvme/functions.sh@12 -- # declare -A bdfs 00:49:48.041 19:43:03 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:49:48.041 19:43:03 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:49:48.041 19:43:03 -- nvme/functions.sh@14 -- # nvme_name= 00:49:48.041 19:43:03 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:48.041 19:43:03 -- nvme/nvme_scc.sh@12 -- # uname 00:49:48.041 19:43:03 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:49:48.041 19:43:03 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
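nvme_scc.sh pulls in test/common/nvme/functions.sh, which declares the ctrls, nvmes, bdfs and ordered_ctrls arrays seen above and fills them during scan_nvme_ctrls. A small, hypothetical consumer of that bookkeeping (not part of the test itself):

source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
scan_nvme_ctrls
# Every discovered controller maps to its PCI address and a namespace array.
for ctrl in "${!ctrls[@]}"; do           # e.g. nvme0
  echo "$ctrl sits at ${bdfs[$ctrl]}"    # e.g. 0000:00:10.0
done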
00:49:48.041 19:43:03 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:48.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:49:48.560 Waiting for block devices as requested 00:49:48.560 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:48.560 19:43:04 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:49:48.560 19:43:04 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:49:48.560 19:43:04 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:49:48.560 19:43:04 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:49:48.560 19:43:04 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:49:48.560 19:43:04 -- scripts/common.sh@15 -- # local i 00:49:48.560 19:43:04 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:49:48.560 19:43:04 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:49:48.560 19:43:04 -- scripts/common.sh@24 -- # return 0 00:49:48.560 19:43:04 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:49:48.560 19:43:04 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:49:48.560 19:43:04 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@18 -- # shift 00:49:48.560 19:43:04 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.560 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:49:48.560 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.560 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 
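The long register dump running through this part of the log is nvme_get splitting every "field : value" line of nvme id-ctrl on the colon and storing it in a bash associative array; the eval in the trace is only there because the array name is passed in by reference. A stripped-down sketch of the same parse, assuming the same nvme-cli binary:

declare -A nvme0=()
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}                            # "mdts      " -> "mdts"
  [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }    # drop the space after ':'
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "${nvme0[mdts]}"                                 # 7 for this QEMU controller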
00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:49:48.561 19:43:04 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.561 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.561 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.561 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- 
# read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:49:48.562 
19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:49:48.562 
19:43:04 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:49:48.562 19:43:04 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.562 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.562 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 
19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:49:48.563 19:43:04 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:49:48.563 19:43:04 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:49:48.563 19:43:04 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:49:48.563 19:43:04 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@18 -- # shift 00:49:48.563 19:43:04 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 
00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
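The namespace being parsed here reports nsze=0x140000 and flbas=0x4, i.e. LBA format 4, which the format list further down shows as lbads:12 (4096-byte blocks). That works out to the 5 GB the simple_copy app prints later:

echo $(( 0x140000 * 4096 ))             # 5368709120 bytes
echo $(( (0x140000 * 4096) >> 30 ))     # 5 (GiB)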
00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.563 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:49:48.563 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.563 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 
19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:49:48.564 19:43:04 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # IFS=: 00:49:48.564 19:43:04 -- nvme/functions.sh@21 -- # read -r reg val 00:49:48.564 19:43:04 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:49:48.564 19:43:04 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:49:48.564 19:43:04 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:49:48.564 19:43:04 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:49:48.564 19:43:04 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:49:48.564 19:43:04 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:49:48.564 19:43:04 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:49:48.564 19:43:04 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:49:48.564 19:43:04 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:49:48.564 19:43:04 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:49:48.564 19:43:04 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:49:48.564 19:43:04 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:49:48.823 19:43:04 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:49:48.823 19:43:04 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:49:48.823 19:43:04 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:49:48.823 19:43:04 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:49:48.823 19:43:04 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:49:48.823 19:43:04 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:49:48.823 19:43:04 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:49:48.823 19:43:04 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:49:48.823 19:43:04 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:49:48.823 19:43:04 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:49:48.823 19:43:04 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:49:48.823 19:43:04 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:49:48.823 19:43:04 -- nvme/functions.sh@76 -- # echo 0x15d 00:49:48.823 19:43:04 -- nvme/functions.sh@184 -- # oncs=0x15d 00:49:48.823 19:43:04 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:49:48.823 19:43:04 -- nvme/functions.sh@197 -- # echo nvme0 00:49:48.823 19:43:04 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:49:48.823 19:43:04 -- nvme/functions.sh@206 -- # echo nvme0 00:49:48.823 19:43:04 -- nvme/functions.sh@207 -- # return 0 00:49:48.823 19:43:04 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:49:48.823 19:43:04 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:49:48.823 19:43:04 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:49.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:49:49.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:49:50.013 19:43:05 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:49:50.013 19:43:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:49:50.013 19:43:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:50.013 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:49:50.271 ************************************ 00:49:50.271 START TEST nvme_simple_copy 00:49:50.271 ************************************ 00:49:50.271 19:43:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:49:50.530 Initializing NVMe Controllers 00:49:50.530 Attaching to 0000:00:10.0 00:49:50.530 Controller supports SCC. Attached to 0000:00:10.0 00:49:50.530 Namespace ID: 1 size: 5GB 00:49:50.530 Initialization complete. 00:49:50.530 00:49:50.530 Controller QEMU NVMe Ctrl (12340 ) 00:49:50.530 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:49:50.530 Namespace Block Size:4096 00:49:50.530 Writing LBAs 0 to 63 with Random Data 00:49:50.530 Copied LBAs from 0 - 63 to the Destination LBA 256 00:49:50.530 LBAs matching Written Data: 64 00:49:50.530 00:49:50.530 real 0m0.319s 00:49:50.530 user 0m0.140s 00:49:50.530 sys 0m0.079s 00:49:50.530 19:43:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:50.530 ************************************ 00:49:50.530 END TEST nvme_simple_copy 00:49:50.530 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:49:50.530 ************************************ 00:49:50.530 00:49:50.530 real 0m2.509s 00:49:50.530 user 0m0.754s 00:49:50.530 sys 0m1.664s 00:49:50.530 19:43:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:50.530 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:49:50.530 ************************************ 00:49:50.530 END TEST nvme_scc 00:49:50.530 ************************************ 00:49:50.530 19:43:06 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:49:50.530 19:43:06 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:49:50.530 19:43:06 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:49:50.530 19:43:06 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:49:50.530 19:43:06 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:49:50.530 19:43:06 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:49:50.530 19:43:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:50.530 19:43:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:50.530 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:49:50.530 ************************************ 00:49:50.530 START TEST nvme_rpc 00:49:50.530 ************************************ 00:49:50.530 19:43:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:49:50.790 * Looking for test storage... 
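The controller selection traced above (ctrl_has_scc) reduces to a single bit test: ONCS is 0x15d and bit 8 advertises the Simple Copy Command, so nvme0 qualifies and the simple_copy run above can target it. The same check in isolation:

oncs=0x15d                    # from the id-ctrl data parsed earlier
if (( oncs & 1 << 8 )); then  # bit 8 = Simple Copy Command supported
  echo "Simple Copy supported"
fi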
00:49:50.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:49:50.790 19:43:06 -- common/autotest_common.sh@1510 -- # bdfs=() 00:49:50.790 19:43:06 -- common/autotest_common.sh@1510 -- # local bdfs 00:49:50.790 19:43:06 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:49:50.790 19:43:06 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:49:50.790 19:43:06 -- common/autotest_common.sh@1499 -- # bdfs=() 00:49:50.790 19:43:06 -- common/autotest_common.sh@1499 -- # local bdfs 00:49:50.790 19:43:06 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:49:50.790 19:43:06 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:49:50.790 19:43:06 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:49:50.790 19:43:06 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:49:50.790 19:43:06 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:49:50.790 19:43:06 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=153477 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:49:50.790 19:43:06 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 153477 00:49:50.790 19:43:06 -- common/autotest_common.sh@817 -- # '[' -z 153477 ']' 00:49:50.790 19:43:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:50.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:50.790 19:43:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:49:50.790 19:43:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:50.790 19:43:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:49:50.790 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:49:50.790 [2024-04-18 19:43:06.682683] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:49:50.790 [2024-04-18 19:43:06.682884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153477 ] 00:49:51.048 [2024-04-18 19:43:06.870234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:49:51.305 [2024-04-18 19:43:07.143233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:51.305 [2024-04-18 19:43:07.143243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:52.681 19:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:49:52.681 19:43:08 -- common/autotest_common.sh@850 -- # return 0 00:49:52.681 19:43:08 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:49:52.681 Nvme0n1 00:49:52.681 19:43:08 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:49:52.681 19:43:08 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:49:52.939 request: 00:49:52.939 { 00:49:52.939 "filename": "non_existing_file", 00:49:52.939 "bdev_name": "Nvme0n1", 00:49:52.939 "method": "bdev_nvme_apply_firmware", 00:49:52.939 "req_id": 1 00:49:52.939 } 00:49:52.939 Got JSON-RPC error response 00:49:52.939 response: 00:49:52.939 { 00:49:52.939 "code": -32603, 00:49:52.939 "message": "open file failed." 00:49:52.939 } 00:49:52.939 19:43:08 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:49:52.939 19:43:08 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:49:52.939 19:43:08 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:49:53.197 19:43:09 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:49:53.197 19:43:09 -- nvme/nvme_rpc.sh@40 -- # killprocess 153477 00:49:53.197 19:43:09 -- common/autotest_common.sh@936 -- # '[' -z 153477 ']' 00:49:53.197 19:43:09 -- common/autotest_common.sh@940 -- # kill -0 153477 00:49:53.197 19:43:09 -- common/autotest_common.sh@941 -- # uname 00:49:53.197 19:43:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:49:53.197 19:43:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 153477 00:49:53.197 19:43:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:49:53.197 19:43:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:49:53.197 killing process with pid 153477 00:49:53.197 19:43:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 153477' 00:49:53.197 19:43:09 -- common/autotest_common.sh@955 -- # kill 153477 00:49:53.197 19:43:09 -- common/autotest_common.sh@960 -- # wait 153477 00:49:56.481 00:49:56.481 real 0m5.336s 00:49:56.481 user 0m10.062s 00:49:56.481 sys 0m0.610s 00:49:56.481 19:43:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:49:56.481 ************************************ 00:49:56.481 END TEST nvme_rpc 00:49:56.481 ************************************ 00:49:56.481 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:49:56.481 19:43:11 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:49:56.481 19:43:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:49:56.481 19:43:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:49:56.481 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:49:56.481 ************************************ 00:49:56.481 
START TEST nvme_rpc_timeouts 00:49:56.481 ************************************ 00:49:56.481 19:43:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:49:56.481 * Looking for test storage... 00:49:56.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_153596 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_153596 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=153623 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:49:56.481 19:43:11 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 153623 00:49:56.481 19:43:11 -- common/autotest_common.sh@817 -- # '[' -z 153623 ']' 00:49:56.481 19:43:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:56.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:56.481 19:43:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:49:56.481 19:43:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:56.481 19:43:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:49:56.481 19:43:11 -- common/autotest_common.sh@10 -- # set +x 00:49:56.481 [2024-04-18 19:43:12.000802] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:49:56.481 [2024-04-18 19:43:12.001218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153623 ] 00:49:56.481 [2024-04-18 19:43:12.182607] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:49:56.738 [2024-04-18 19:43:12.436594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:56.738 [2024-04-18 19:43:12.436604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:57.703 19:43:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:49:57.703 19:43:13 -- common/autotest_common.sh@850 -- # return 0 00:49:57.703 Checking default timeout settings: 00:49:57.703 19:43:13 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:49:57.703 19:43:13 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:49:57.960 Making settings changes with rpc: 00:49:57.960 19:43:13 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:49:57.960 19:43:13 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:49:58.217 Check default vs. modified settings: 00:49:58.217 19:43:14 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:49:58.217 19:43:14 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:49:58.782 Setting action_on_timeout is changed as expected. 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:49:58.782 Setting timeout_us is changed as expected. 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:49:58.782 Setting timeout_admin_us is changed as expected. 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
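For reference, the default-vs-modified check traced above reduces to three rpc.py invocations plus a per-field comparison. This is a minimal sketch assembled only from the commands visible in this log; the redirections into the /tmp settings files are an assumption inferred from the grep steps above, not shown by the xtrace output itself:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/settings_default_153596
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/settings_modified_153596
  # action_on_timeout, timeout_us and timeout_admin_us are then pulled out of both dumps
  # with grep/awk/sed and compared (none/abort, 0/12000000, 0/24000000), as echoed above.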
00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_153596 /tmp/settings_modified_153596 00:49:58.782 19:43:14 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 153623 00:49:58.782 19:43:14 -- common/autotest_common.sh@936 -- # '[' -z 153623 ']' 00:49:58.782 19:43:14 -- common/autotest_common.sh@940 -- # kill -0 153623 00:49:58.782 19:43:14 -- common/autotest_common.sh@941 -- # uname 00:49:58.782 19:43:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:49:58.782 19:43:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 153623 00:49:58.782 19:43:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:49:58.782 killing process with pid 153623 00:49:58.782 19:43:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:49:58.783 19:43:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 153623' 00:49:58.783 19:43:14 -- common/autotest_common.sh@955 -- # kill 153623 00:49:58.783 19:43:14 -- common/autotest_common.sh@960 -- # wait 153623 00:50:02.127 RPC TIMEOUT SETTING TEST PASSED. 00:50:02.127 19:43:17 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:50:02.127 ************************************ 00:50:02.127 END TEST nvme_rpc_timeouts 00:50:02.127 ************************************ 00:50:02.127 00:50:02.127 real 0m5.595s 00:50:02.127 user 0m10.770s 00:50:02.127 sys 0m0.640s 00:50:02.127 19:43:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:02.127 19:43:17 -- common/autotest_common.sh@10 -- # set +x 00:50:02.127 19:43:17 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@245 -- # [[ 0 -eq 1 ]] 00:50:02.127 19:43:17 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@258 -- # timing_exit lib 00:50:02.127 19:43:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:50:02.127 19:43:17 -- common/autotest_common.sh@10 -- # set +x 00:50:02.127 19:43:17 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:50:02.127 19:43:17 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:50:02.128 19:43:17 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:50:02.128 19:43:17 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:50:02.128 19:43:17 -- spdk/autotest.sh@373 -- # [[ 1 -eq 1 ]] 00:50:02.128 19:43:17 -- spdk/autotest.sh@374 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:50:02.128 19:43:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:50:02.128 19:43:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:50:02.128 19:43:17 -- common/autotest_common.sh@10 -- # set +x 00:50:02.128 ************************************ 00:50:02.128 START TEST blockdev_raid5f 00:50:02.128 ************************************ 00:50:02.128 19:43:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:50:02.128 * Looking for test storage... 00:50:02.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:50:02.128 19:43:17 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:50:02.128 19:43:17 -- bdev/nbd_common.sh@6 -- # set -e 00:50:02.128 19:43:17 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:50:02.128 19:43:17 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:02.128 19:43:17 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:50:02.128 19:43:17 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:50:02.128 19:43:17 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:50:02.128 19:43:17 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:50:02.128 19:43:17 -- bdev/blockdev.sh@20 -- # : 00:50:02.128 19:43:17 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:50:02.128 19:43:17 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:50:02.128 19:43:17 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:50:02.128 19:43:17 -- bdev/blockdev.sh@674 -- # uname -s 00:50:02.128 19:43:17 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:50:02.128 19:43:17 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:50:02.128 19:43:17 -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:50:02.128 19:43:17 -- bdev/blockdev.sh@683 -- # crypto_device= 00:50:02.128 19:43:17 -- bdev/blockdev.sh@684 -- # dek= 00:50:02.128 19:43:17 -- bdev/blockdev.sh@685 -- # env_ctx= 00:50:02.128 19:43:17 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:50:02.128 19:43:17 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:50:02.128 19:43:17 -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:50:02.128 19:43:17 -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:50:02.128 19:43:17 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:50:02.128 19:43:17 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=153807 00:50:02.128 19:43:17 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:50:02.128 19:43:17 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:50:02.128 19:43:17 -- bdev/blockdev.sh@49 -- # waitforlisten 153807 00:50:02.128 19:43:17 -- common/autotest_common.sh@817 -- # '[' -z 153807 ']' 00:50:02.128 19:43:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:02.128 19:43:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:50:02.128 19:43:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:02.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:02.128 19:43:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:50:02.128 19:43:17 -- common/autotest_common.sh@10 -- # set +x 00:50:02.128 [2024-04-18 19:43:17.737867] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:50:02.128 [2024-04-18 19:43:17.738273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153807 ] 00:50:02.128 [2024-04-18 19:43:17.918395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:02.386 [2024-04-18 19:43:18.228058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:03.761 19:43:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:50:03.761 19:43:19 -- common/autotest_common.sh@850 -- # return 0 00:50:03.761 19:43:19 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:50:03.761 19:43:19 -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:50:03.761 19:43:19 -- bdev/blockdev.sh@280 -- # rpc_cmd 00:50:03.761 19:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:50:03.761 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:50:03.761 Malloc0 00:50:03.761 Malloc1 00:50:03.761 Malloc2 00:50:03.761 19:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:50:03.761 19:43:19 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:50:03.761 19:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:50:03.761 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:50:03.761 19:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:50:03.761 19:43:19 -- bdev/blockdev.sh@740 -- # cat 00:50:03.761 19:43:19 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:50:03.761 19:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:50:03.761 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:50:03.761 19:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:50:03.761 19:43:19 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:50:03.761 19:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:50:03.761 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:50:03.761 19:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:50:03.761 19:43:19 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:50:03.761 19:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:50:03.761 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:50:03.761 19:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:50:03.761 19:43:19 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:50:03.761 19:43:19 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:50:03.761 19:43:19 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:50:03.761 19:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:50:03.761 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:50:03.761 19:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:50:03.761 19:43:19 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:50:03.761 19:43:19 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6303c66d-701f-4913-815c-876ec11a991d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6303c66d-701f-4913-815c-876ec11a991d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6303c66d-701f-4913-815c-876ec11a991d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "a8a56674-8947-4a5b-bf4d-a6e6cf9fab34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5d21b704-2cbd-4310-ae13-a5ec41710000",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "61333468-94ec-46d1-bdc4-b6a12afaedea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:50:03.761 19:43:19 -- bdev/blockdev.sh@749 -- # jq -r .name 00:50:04.020 19:43:19 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:50:04.020 19:43:19 -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:50:04.020 19:43:19 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:50:04.020 19:43:19 -- bdev/blockdev.sh@754 -- # killprocess 153807 00:50:04.020 19:43:19 -- common/autotest_common.sh@936 -- # '[' -z 153807 ']' 00:50:04.020 19:43:19 -- common/autotest_common.sh@940 -- # kill -0 153807 00:50:04.020 19:43:19 -- common/autotest_common.sh@941 -- # uname 00:50:04.020 19:43:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:50:04.020 19:43:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 153807 00:50:04.020 19:43:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:50:04.020 19:43:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:50:04.020 19:43:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 153807' 00:50:04.020 killing process with pid 153807 00:50:04.020 19:43:19 -- common/autotest_common.sh@955 -- # kill 153807 00:50:04.020 19:43:19 -- common/autotest_common.sh@960 -- # wait 153807 00:50:07.339 19:43:22 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:50:07.339 19:43:22 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:50:07.339 19:43:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:50:07.339 19:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:07.339 19:43:22 -- common/autotest_common.sh@10 -- # set +x 00:50:07.339 ************************************ 00:50:07.339 START TEST bdev_hello_world 00:50:07.339 ************************************ 00:50:07.339 19:43:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:50:07.339 [2024-04-18 19:43:23.026139] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 
00:50:07.339 [2024-04-18 19:43:23.026490] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153902 ] 00:50:07.339 [2024-04-18 19:43:23.187191] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:07.597 [2024-04-18 19:43:23.421679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:08.164 [2024-04-18 19:43:24.049136] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:50:08.164 [2024-04-18 19:43:24.049844] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:50:08.164 [2024-04-18 19:43:24.050138] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:50:08.164 [2024-04-18 19:43:24.050942] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:50:08.164 [2024-04-18 19:43:24.051327] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:50:08.164 [2024-04-18 19:43:24.051630] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:50:08.164 [2024-04-18 19:43:24.051966] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:50:08.164 00:50:08.165 [2024-04-18 19:43:24.052245] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:50:10.100 ************************************ 00:50:10.100 END TEST bdev_hello_world 00:50:10.100 ************************************ 00:50:10.100 00:50:10.100 real 0m2.910s 00:50:10.100 user 0m2.541s 00:50:10.100 sys 0m0.253s 00:50:10.100 19:43:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:10.100 19:43:25 -- common/autotest_common.sh@10 -- # set +x 00:50:10.100 19:43:25 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:50:10.100 19:43:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:50:10.100 19:43:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:10.100 19:43:25 -- common/autotest_common.sh@10 -- # set +x 00:50:10.100 ************************************ 00:50:10.100 START TEST bdev_bounds 00:50:10.100 ************************************ 00:50:10.100 Process bdevio pid: 153963 00:50:10.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:10.100 19:43:25 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:50:10.100 19:43:25 -- bdev/blockdev.sh@290 -- # bdevio_pid=153963 00:50:10.100 19:43:25 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:50:10.100 19:43:25 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 153963' 00:50:10.100 19:43:25 -- bdev/blockdev.sh@293 -- # waitforlisten 153963 00:50:10.100 19:43:25 -- common/autotest_common.sh@817 -- # '[' -z 153963 ']' 00:50:10.100 19:43:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:10.100 19:43:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:50:10.100 19:43:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:50:10.100 19:43:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:50:10.100 19:43:25 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:50:10.100 19:43:25 -- common/autotest_common.sh@10 -- # set +x 00:50:10.100 [2024-04-18 19:43:26.022737] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:10.100 [2024-04-18 19:43:26.023044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153963 ] 00:50:10.358 [2024-04-18 19:43:26.202775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:50:10.616 [2024-04-18 19:43:26.510309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:10.616 [2024-04-18 19:43:26.510477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:10.616 [2024-04-18 19:43:26.510474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:50:11.550 19:43:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:50:11.550 19:43:27 -- common/autotest_common.sh@850 -- # return 0 00:50:11.550 19:43:27 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:50:11.550 I/O targets: 00:50:11.550 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:50:11.550 00:50:11.550 00:50:11.550 CUnit - A unit testing framework for C - Version 2.1-3 00:50:11.550 http://cunit.sourceforge.net/ 00:50:11.550 00:50:11.550 00:50:11.550 Suite: bdevio tests on: raid5f 00:50:11.550 Test: blockdev write read block ...passed 00:50:11.550 Test: blockdev write zeroes read block ...passed 00:50:11.550 Test: blockdev write zeroes read no split ...passed 00:50:11.550 Test: blockdev write zeroes read split ...passed 00:50:11.809 Test: blockdev write zeroes read split partial ...passed 00:50:11.809 Test: blockdev reset ...passed 00:50:11.809 Test: blockdev write read 8 blocks ...passed 00:50:11.809 Test: blockdev write read size > 128k ...passed 00:50:11.809 Test: blockdev write read invalid size ...passed 00:50:11.809 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:50:11.809 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:50:11.809 Test: blockdev write read max offset ...passed 00:50:11.809 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:50:11.809 Test: blockdev writev readv 8 blocks ...passed 00:50:11.809 Test: blockdev writev readv 30 x 1block ...passed 00:50:11.809 Test: blockdev writev readv block ...passed 00:50:11.809 Test: blockdev writev readv size > 128k ...passed 00:50:11.809 Test: blockdev writev readv size > 128k in two iovs ...passed 00:50:11.809 Test: blockdev comparev and writev ...passed 00:50:11.809 Test: blockdev nvme passthru rw ...passed 00:50:11.809 Test: blockdev nvme passthru vendor specific ...passed 00:50:11.809 Test: blockdev nvme admin passthru ...passed 00:50:11.809 Test: blockdev copy ...passed 00:50:11.809 00:50:11.809 Run Summary: Type Total Ran Passed Failed Inactive 00:50:11.809 suites 1 1 n/a 0 0 00:50:11.809 tests 23 23 23 0 0 00:50:11.809 asserts 130 130 130 0 n/a 00:50:11.809 00:50:11.809 Elapsed time = 0.611 seconds 00:50:11.809 0 00:50:11.809 19:43:27 -- bdev/blockdev.sh@295 -- # killprocess 153963 00:50:11.809 19:43:27 -- common/autotest_common.sh@936 -- # '[' -z 153963 ']' 
00:50:11.809 19:43:27 -- common/autotest_common.sh@940 -- # kill -0 153963 00:50:11.809 19:43:27 -- common/autotest_common.sh@941 -- # uname 00:50:11.809 19:43:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:50:11.809 19:43:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 153963 00:50:11.809 19:43:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:50:11.809 killing process with pid 153963 00:50:11.809 19:43:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:50:11.809 19:43:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 153963' 00:50:11.809 19:43:27 -- common/autotest_common.sh@955 -- # kill 153963 00:50:11.809 19:43:27 -- common/autotest_common.sh@960 -- # wait 153963 00:50:13.755 ************************************ 00:50:13.755 END TEST bdev_bounds 00:50:13.755 ************************************ 00:50:13.755 19:43:29 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:50:13.755 00:50:13.755 real 0m3.622s 00:50:13.755 user 0m8.515s 00:50:13.755 sys 0m0.401s 00:50:13.755 19:43:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:13.755 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:50:13.755 19:43:29 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:50:13.755 19:43:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:50:13.755 19:43:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:13.755 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:50:13.755 ************************************ 00:50:13.755 START TEST bdev_nbd 00:50:13.755 ************************************ 00:50:13.755 19:43:29 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:50:13.755 19:43:29 -- bdev/blockdev.sh@300 -- # uname -s 00:50:13.755 19:43:29 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:50:13.755 19:43:29 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:13.755 19:43:29 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:14.014 19:43:29 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:50:14.014 19:43:29 -- bdev/blockdev.sh@304 -- # local bdev_all 00:50:14.014 19:43:29 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:50:14.014 19:43:29 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:50:14.014 19:43:29 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:50:14.014 19:43:29 -- bdev/blockdev.sh@311 -- # local nbd_all 00:50:14.014 19:43:29 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:50:14.014 19:43:29 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:50:14.014 19:43:29 -- bdev/blockdev.sh@314 -- # local nbd_list 00:50:14.014 19:43:29 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:50:14.014 19:43:29 -- bdev/blockdev.sh@315 -- # local bdev_list 00:50:14.014 19:43:29 -- bdev/blockdev.sh@318 -- # nbd_pid=154059 00:50:14.014 19:43:29 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:50:14.014 19:43:29 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:50:14.014 19:43:29 -- bdev/blockdev.sh@320 -- # waitforlisten 154059 /var/tmp/spdk-nbd.sock 00:50:14.014 19:43:29 -- common/autotest_common.sh@817 -- # '[' -z 154059 ']' 00:50:14.014 19:43:29 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:50:14.014 19:43:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:50:14.014 19:43:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:50:14.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:50:14.014 19:43:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:50:14.014 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:50:14.014 [2024-04-18 19:43:29.740807] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:14.014 [2024-04-18 19:43:29.741128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:14.014 [2024-04-18 19:43:29.905201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:14.272 [2024-04-18 19:43:30.143533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:14.880 19:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:50:14.880 19:43:30 -- common/autotest_common.sh@850 -- # return 0 00:50:14.880 19:43:30 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@24 -- # local i 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:50:14.880 19:43:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:50:15.140 19:43:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:50:15.140 19:43:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:50:15.140 19:43:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:50:15.140 19:43:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:50:15.140 19:43:31 -- common/autotest_common.sh@855 -- # local i 00:50:15.140 19:43:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:50:15.140 19:43:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:50:15.140 19:43:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:50:15.140 19:43:31 -- common/autotest_common.sh@859 -- # break 00:50:15.140 19:43:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:50:15.140 19:43:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:50:15.140 19:43:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:50:15.140 1+0 records in 00:50:15.140 1+0 records out 00:50:15.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650911 s, 6.3 MB/s 00:50:15.140 19:43:31 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:15.140 19:43:31 -- common/autotest_common.sh@872 -- # size=4096 00:50:15.140 19:43:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:15.140 19:43:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:50:15.140 19:43:31 -- common/autotest_common.sh@875 -- # return 0 00:50:15.140 19:43:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:50:15.140 19:43:31 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:50:15.140 19:43:31 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:50:15.401 { 00:50:15.401 "nbd_device": "/dev/nbd0", 00:50:15.401 "bdev_name": "raid5f" 00:50:15.401 } 00:50:15.401 ]' 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@119 -- # echo '[ 00:50:15.401 { 00:50:15.401 "nbd_device": "/dev/nbd0", 00:50:15.401 "bdev_name": "raid5f" 00:50:15.401 } 00:50:15.401 ]' 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@51 -- # local i 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:15.401 19:43:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@41 -- # break 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@45 -- # return 0 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:15.663 19:43:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@65 -- # true 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@65 -- # count=0 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@122 -- # count=0 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@127 -- # return 0 00:50:16.231 19:43:31 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify 
/var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@12 -- # local i 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:50:16.231 19:43:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:50:16.231 /dev/nbd0 00:50:16.231 19:43:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:50:16.231 19:43:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:50:16.231 19:43:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:50:16.231 19:43:32 -- common/autotest_common.sh@855 -- # local i 00:50:16.231 19:43:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:50:16.231 19:43:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:50:16.231 19:43:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:50:16.231 19:43:32 -- common/autotest_common.sh@859 -- # break 00:50:16.231 19:43:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:50:16.231 19:43:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:50:16.231 19:43:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:50:16.231 1+0 records in 00:50:16.231 1+0 records out 00:50:16.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570715 s, 7.2 MB/s 00:50:16.489 19:43:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:16.489 19:43:32 -- common/autotest_common.sh@872 -- # size=4096 00:50:16.489 19:43:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:16.489 19:43:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:50:16.489 19:43:32 -- common/autotest_common.sh@875 -- # return 0 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:50:16.489 { 00:50:16.489 "nbd_device": "/dev/nbd0", 00:50:16.489 "bdev_name": "raid5f" 00:50:16.489 } 00:50:16.489 ]' 00:50:16.489 19:43:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:50:16.489 { 00:50:16.489 "nbd_device": "/dev/nbd0", 00:50:16.489 "bdev_name": "raid5f" 00:50:16.489 } 00:50:16.489 ]' 00:50:16.489 19:43:32 -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:16.747 19:43:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@65 -- # count=1 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@66 -- # echo 1 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@95 -- # count=1 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:50:16.748 256+0 records in 00:50:16.748 256+0 records out 00:50:16.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00912438 s, 115 MB/s 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:50:16.748 256+0 records in 00:50:16.748 256+0 records out 00:50:16.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349687 s, 30.0 MB/s 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@51 -- # local i 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:16.748 19:43:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@41 -- # break 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@45 -- # return 0 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:17.006 19:43:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:17.265 19:43:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:50:17.265 19:43:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:50:17.265 19:43:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@65 -- # true 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@65 -- # count=0 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@104 -- # count=0 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@109 -- # return 0 00:50:17.265 19:43:33 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:50:17.265 19:43:33 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:50:17.522 malloc_lvol_verify 00:50:17.522 19:43:33 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:50:17.779 92f22fdb-70f6-48b4-a014-6e6348bb49e2 00:50:17.779 19:43:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:50:18.035 30d8745d-3826-46de-97db-cb394d265020 00:50:18.035 19:43:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:50:18.292 /dev/nbd0 00:50:18.292 19:43:34 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:50:18.293 mke2fs 1.45.5 (07-Jan-2020) 00:50:18.293 00:50:18.293 Filesystem too small for a journal 00:50:18.293 Creating filesystem with 1024 4k blocks and 1024 inodes 00:50:18.293 00:50:18.293 Allocating group tables: 0/1 done 00:50:18.293 Writing inode tables: 0/1 done 00:50:18.293 Writing superblocks and filesystem accounting information: 0/1 done 00:50:18.293 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@51 -- # local i 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:18.293 19:43:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd0 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@41 -- # break 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@45 -- # return 0 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:50:18.549 19:43:34 -- bdev/nbd_common.sh@147 -- # return 0 00:50:18.549 19:43:34 -- bdev/blockdev.sh@326 -- # killprocess 154059 00:50:18.549 19:43:34 -- common/autotest_common.sh@936 -- # '[' -z 154059 ']' 00:50:18.549 19:43:34 -- common/autotest_common.sh@940 -- # kill -0 154059 00:50:18.549 19:43:34 -- common/autotest_common.sh@941 -- # uname 00:50:18.550 19:43:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:50:18.550 19:43:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 154059 00:50:18.550 killing process with pid 154059 00:50:18.550 19:43:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:50:18.550 19:43:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:50:18.550 19:43:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 154059' 00:50:18.550 19:43:34 -- common/autotest_common.sh@955 -- # kill 154059 00:50:18.550 19:43:34 -- common/autotest_common.sh@960 -- # wait 154059 00:50:20.467 ************************************ 00:50:20.467 END TEST bdev_nbd 00:50:20.467 ************************************ 00:50:20.467 19:43:36 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:50:20.467 00:50:20.467 real 0m6.445s 00:50:20.467 user 0m8.835s 00:50:20.467 sys 0m1.394s 00:50:20.467 19:43:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:20.467 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:50:20.467 19:43:36 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:50:20.467 19:43:36 -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:50:20.467 19:43:36 -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:50:20.467 19:43:36 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:20.467 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:50:20.467 ************************************ 00:50:20.467 START TEST bdev_fio 00:50:20.467 ************************************ 00:50:20.467 19:43:36 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:50:20.467 19:43:36 -- bdev/blockdev.sh@331 -- # local env_context 00:50:20.467 19:43:36 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:50:20.467 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:50:20.467 19:43:36 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:50:20.467 19:43:36 -- bdev/blockdev.sh@339 -- # echo '' 00:50:20.467 19:43:36 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:50:20.467 19:43:36 -- bdev/blockdev.sh@339 -- # env_context= 00:50:20.467 19:43:36 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1266 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:20.467 19:43:36 -- common/autotest_common.sh@1267 -- # local workload=verify 00:50:20.467 19:43:36 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:50:20.467 19:43:36 -- common/autotest_common.sh@1269 -- # local env_context= 00:50:20.467 19:43:36 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:50:20.467 19:43:36 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:20.467 19:43:36 -- common/autotest_common.sh@1287 -- # cat 00:50:20.467 19:43:36 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1300 -- # cat 00:50:20.467 19:43:36 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:50:20.467 19:43:36 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:50:20.467 19:43:36 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:50:20.467 19:43:36 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:50:20.467 19:43:36 -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:50:20.467 19:43:36 -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:50:20.467 19:43:36 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:50:20.467 19:43:36 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:20.467 19:43:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:20.467 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:50:20.467 ************************************ 00:50:20.467 START TEST bdev_fio_rw_verify 00:50:20.467 ************************************ 00:50:20.467 19:43:36 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:20.467 19:43:36 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:20.467 19:43:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:50:20.467 19:43:36 -- common/autotest_common.sh@1325 -- # sanitizers=(libasan libclang_rt.asan) 00:50:20.467 19:43:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:50:20.467 19:43:36 -- common/autotest_common.sh@1326 -- # 
local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:20.467 19:43:36 -- common/autotest_common.sh@1327 -- # shift 00:50:20.467 19:43:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:50:20.467 19:43:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:50:20.467 19:43:36 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:20.467 19:43:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:50:20.467 19:43:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:50:20.467 19:43:36 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:50:20.467 19:43:36 -- common/autotest_common.sh@1333 -- # break 00:50:20.467 19:43:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:20.467 19:43:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:20.731 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:50:20.731 fio-3.35 00:50:20.731 Starting 1 thread 00:50:32.925 00:50:32.925 job_raid5f: (groupid=0, jobs=1): err= 0: pid=154315: Thu Apr 18 19:43:47 2024 00:50:32.925 read: IOPS=8199, BW=32.0MiB/s (33.6MB/s)(320MiB/10001msec) 00:50:32.925 slat (usec): min=20, max=121, avg=28.91, stdev= 7.88 00:50:32.925 clat (usec): min=13, max=900, avg=192.51, stdev=79.61 00:50:32.925 lat (usec): min=40, max=981, avg=221.42, stdev=83.06 00:50:32.925 clat percentiles (usec): 00:50:32.925 | 50.000th=[ 192], 99.000th=[ 429], 99.900th=[ 709], 99.990th=[ 816], 00:50:32.925 | 99.999th=[ 898] 00:50:32.925 write: IOPS=8582, BW=33.5MiB/s (35.2MB/s)(330MiB/9853msec); 0 zone resets 00:50:32.925 slat (usec): min=10, max=2880, avg=25.87, stdev=12.12 00:50:32.925 clat (usec): min=78, max=3372, avg=438.59, stdev=106.17 00:50:32.925 lat (usec): min=101, max=3395, avg=464.46, stdev=111.02 00:50:32.925 clat percentiles (usec): 00:50:32.925 | 50.000th=[ 429], 99.000th=[ 840], 99.900th=[ 1090], 99.990th=[ 1467], 00:50:32.925 | 99.999th=[ 3359] 00:50:32.925 bw ( KiB/s): min=21152, max=40576, per=98.55%, avg=33831.16, stdev=5141.34, samples=19 00:50:32.925 iops : min= 5288, max=10144, avg=8457.79, stdev=1285.33, samples=19 00:50:32.925 lat (usec) : 20=0.01%, 100=5.56%, 250=33.37%, 500=53.99%, 750=6.00% 00:50:32.925 lat (usec) : 1000=0.91% 00:50:32.925 lat (msec) : 2=0.16%, 4=0.01% 00:50:32.926 cpu : usr=99.72%, sys=0.21%, ctx=83, majf=0, minf=5807 00:50:32.926 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:50:32.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:32.926 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:32.926 issued rwts: total=82000,84559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:32.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:50:32.926 00:50:32.926 Run status group 0 (all jobs): 00:50:32.926 READ: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=320MiB (336MB), run=10001-10001msec 00:50:32.926 WRITE: bw=33.5MiB/s (35.2MB/s), 
33.5MiB/s-33.5MiB/s (35.2MB/s-35.2MB/s), io=330MiB (346MB), run=9853-9853msec 00:50:33.917 ----------------------------------------------------- 00:50:33.917 Suppressions used: 00:50:33.917 count bytes template 00:50:33.917 1 7 /usr/src/fio/parse.c 00:50:33.917 20 1920 /usr/src/fio/iolog.c 00:50:33.917 2 596 libcrypto.so 00:50:33.917 ----------------------------------------------------- 00:50:33.917 00:50:33.917 ************************************ 00:50:33.917 END TEST bdev_fio_rw_verify 00:50:33.917 ************************************ 00:50:33.917 00:50:33.917 real 0m13.305s 00:50:33.917 user 0m14.530s 00:50:33.917 sys 0m0.775s 00:50:33.917 19:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:33.917 19:43:49 -- common/autotest_common.sh@10 -- # set +x 00:50:33.917 19:43:49 -- bdev/blockdev.sh@350 -- # rm -f 00:50:33.918 19:43:49 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:33.918 19:43:49 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:33.918 19:43:49 -- common/autotest_common.sh@1267 -- # local workload=trim 00:50:33.918 19:43:49 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:50:33.918 19:43:49 -- common/autotest_common.sh@1269 -- # local env_context= 00:50:33.918 19:43:49 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:50:33.918 19:43:49 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:33.918 19:43:49 -- common/autotest_common.sh@1287 -- # cat 00:50:33.918 19:43:49 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:50:33.918 19:43:49 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:50:33.918 19:43:49 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6303c66d-701f-4913-815c-876ec11a991d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6303c66d-701f-4913-815c-876ec11a991d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6303c66d-701f-4913-815c-876ec11a991d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "a8a56674-8947-4a5b-bf4d-a6e6cf9fab34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"5d21b704-2cbd-4310-ae13-a5ec41710000",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "61333468-94ec-46d1-bdc4-b6a12afaedea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:50:33.918 19:43:49 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:50:33.918 19:43:49 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:33.918 19:43:49 -- bdev/blockdev.sh@362 -- # popd 00:50:33.918 /home/vagrant/spdk_repo/spdk 00:50:33.918 19:43:49 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:50:33.918 19:43:49 -- bdev/blockdev.sh@364 -- # return 0 00:50:33.918 00:50:33.918 real 0m13.526s 00:50:33.918 user 0m14.676s 00:50:33.918 sys 0m0.848s 00:50:33.918 19:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:33.918 19:43:49 -- common/autotest_common.sh@10 -- # set +x 00:50:33.918 ************************************ 00:50:33.918 END TEST bdev_fio 00:50:33.918 ************************************ 00:50:33.918 19:43:49 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:50:33.918 19:43:49 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:50:33.918 19:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:33.918 19:43:49 -- common/autotest_common.sh@10 -- # set +x 00:50:33.918 ************************************ 00:50:33.918 START TEST bdev_verify 00:50:33.918 ************************************ 00:50:33.918 19:43:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:50:34.175 [2024-04-18 19:43:49.910789] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:34.175 [2024-04-18 19:43:49.911278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154526 ] 00:50:34.175 [2024-04-18 19:43:50.095109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:34.434 [2024-04-18 19:43:50.326135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:34.434 [2024-04-18 19:43:50.326137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:35.380 Running I/O for 5 seconds... 
00:50:40.741 00:50:40.741 Latency(us) 00:50:40.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:40.741 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:40.741 Verification LBA range: start 0x0 length 0x2000 00:50:40.741 raid5f : 5.02 6037.47 23.58 0.00 0.00 31792.63 347.18 49682.53 00:50:40.741 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:50:40.741 Verification LBA range: start 0x2000 length 0x2000 00:50:40.741 raid5f : 5.02 6396.53 24.99 0.00 0.00 30017.76 191.15 26214.40 00:50:40.741 =================================================================================================================== 00:50:40.741 Total : 12434.00 48.57 0.00 0.00 30879.48 191.15 49682.53 00:50:42.109 ************************************ 00:50:42.109 END TEST bdev_verify 00:50:42.109 ************************************ 00:50:42.109 00:50:42.110 real 0m8.045s 00:50:42.110 user 0m14.642s 00:50:42.110 sys 0m0.304s 00:50:42.110 19:43:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:42.110 19:43:57 -- common/autotest_common.sh@10 -- # set +x 00:50:42.110 19:43:57 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:50:42.110 19:43:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:50:42.110 19:43:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:42.110 19:43:57 -- common/autotest_common.sh@10 -- # set +x 00:50:42.110 ************************************ 00:50:42.110 START TEST bdev_verify_big_io 00:50:42.110 ************************************ 00:50:42.110 19:43:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:50:42.110 [2024-04-18 19:43:58.030345] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:42.110 [2024-04-18 19:43:58.030891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154638 ] 00:50:42.367 [2024-04-18 19:43:58.235996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:42.625 [2024-04-18 19:43:58.487235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:42.625 [2024-04-18 19:43:58.487241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:43.561 Running I/O for 5 seconds... 
00:50:48.818 00:50:48.818 Latency(us) 00:50:48.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:48.818 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:50:48.818 Verification LBA range: start 0x0 length 0x200 00:50:48.818 raid5f : 5.22 376.47 23.53 0.00 0.00 8253062.19 161.89 357514.48 00:50:48.818 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:50:48.818 Verification LBA range: start 0x200 length 0x200 00:50:48.818 raid5f : 5.20 377.87 23.62 0.00 0.00 8244085.83 157.99 353519.91 00:50:48.818 =================================================================================================================== 00:50:48.818 Total : 754.34 47.15 0.00 0.00 8248574.01 157.99 357514.48 00:50:50.717 ************************************ 00:50:50.717 END TEST bdev_verify_big_io 00:50:50.717 ************************************ 00:50:50.717 00:50:50.717 real 0m8.291s 00:50:50.717 user 0m15.090s 00:50:50.717 sys 0m0.316s 00:50:50.717 19:44:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:50.717 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:50:50.717 19:44:06 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:50.717 19:44:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:50:50.717 19:44:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:50.717 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:50:50.717 ************************************ 00:50:50.717 START TEST bdev_write_zeroes 00:50:50.717 ************************************ 00:50:50.717 19:44:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:50.717 [2024-04-18 19:44:06.412425] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:50.717 [2024-04-18 19:44:06.412932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154784 ] 00:50:50.717 [2024-04-18 19:44:06.594623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:50.975 [2024-04-18 19:44:06.836045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:51.542 Running I/O for 1 seconds... 
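The three bdevperf stages in this block (bdev_verify, bdev_verify_big_io and bdev_write_zeroes) differ only in workload, IO size and runtime. Condensed from the commands above, with the harness's trailing empty argument dropped and two shorthand variables introduced here purely for readability, they amount to:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $BDEVPERF --json $CONF -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
    $BDEVPERF --json $CONF -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
    $BDEVPERF --json $CONF -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes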
00:50:52.920 00:50:52.920 Latency(us) 00:50:52.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:52.920 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:50:52.920 raid5f : 1.01 21030.55 82.15 0.00 0.00 6063.58 1661.81 6834.47 00:50:52.920 =================================================================================================================== 00:50:52.920 Total : 21030.55 82.15 0.00 0.00 6063.58 1661.81 6834.47 00:50:54.842 ************************************ 00:50:54.842 END TEST bdev_write_zeroes 00:50:54.842 ************************************ 00:50:54.842 00:50:54.842 real 0m4.025s 00:50:54.842 user 0m3.650s 00:50:54.842 sys 0m0.252s 00:50:54.842 19:44:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:54.842 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:50:54.842 19:44:10 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:54.842 19:44:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:50:54.842 19:44:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:54.842 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:50:54.842 ************************************ 00:50:54.842 START TEST bdev_json_nonenclosed 00:50:54.842 ************************************ 00:50:54.842 19:44:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:54.842 [2024-04-18 19:44:10.512614] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:54.842 [2024-04-18 19:44:10.513020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154877 ] 00:50:54.842 [2024-04-18 19:44:10.679389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:55.101 [2024-04-18 19:44:10.918268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:55.101 [2024-04-18 19:44:10.918614] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:50:55.101 [2024-04-18 19:44:10.918794] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:50:55.101 [2024-04-18 19:44:10.918862] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:50:55.667 ************************************ 00:50:55.667 END TEST bdev_json_nonenclosed 00:50:55.667 ************************************ 00:50:55.667 00:50:55.667 real 0m0.990s 00:50:55.667 user 0m0.781s 00:50:55.667 sys 0m0.108s 00:50:55.667 19:44:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:55.667 19:44:11 -- common/autotest_common.sh@10 -- # set +x 00:50:55.667 19:44:11 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:55.667 19:44:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:50:55.667 19:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:50:55.667 19:44:11 -- common/autotest_common.sh@10 -- # set +x 00:50:55.667 ************************************ 00:50:55.667 START TEST bdev_json_nonarray 00:50:55.667 ************************************ 00:50:55.667 19:44:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:55.925 [2024-04-18 19:44:11.605105] Starting SPDK v24.05-pre git sha1 99b3305a5 / DPDK 23.11.0 initialization... 00:50:55.925 [2024-04-18 19:44:11.605920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154919 ] 00:50:55.925 [2024-04-18 19:44:11.805711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:56.182 [2024-04-18 19:44:12.043673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:56.182 [2024-04-18 19:44:12.044029] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
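Both JSON negative tests feed bdevperf a deliberately malformed --json config. For comparison, a well-formed SPDK JSON config is a single object whose "subsystems" key holds an array of subsystem entries; a minimal illustrative sketch (the bdev name and sizes here are examples, not taken from this run) looks like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 }
            }
          ]
        }
      ]
    }

nonenclosed.json omits the enclosing object and nonarray.json supplies "subsystems" as a non-array value, which is exactly what the two *ERROR* lines above report before the app stops with a non-zero status.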
00:50:56.182 [2024-04-18 19:44:12.044167] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:50:56.182 [2024-04-18 19:44:12.044224] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:50:56.747 ************************************ 00:50:56.747 END TEST bdev_json_nonarray 00:50:56.747 ************************************ 00:50:56.747 00:50:56.747 real 0m1.027s 00:50:56.747 user 0m0.778s 00:50:56.747 sys 0m0.148s 00:50:56.747 19:44:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:56.747 19:44:12 -- common/autotest_common.sh@10 -- # set +x 00:50:56.747 19:44:12 -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:50:56.747 19:44:12 -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:50:56.747 19:44:12 -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:50:56.747 19:44:12 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:50:56.747 19:44:12 -- bdev/blockdev.sh@811 -- # cleanup 00:50:56.747 19:44:12 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:50:56.747 19:44:12 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:56.747 19:44:12 -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:50:56.747 19:44:12 -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:50:56.747 19:44:12 -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:50:56.747 19:44:12 -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:50:56.747 00:50:56.747 real 0m55.051s 00:50:56.747 user 1m15.373s 00:50:56.747 sys 0m4.939s 00:50:56.747 19:44:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:50:56.747 19:44:12 -- common/autotest_common.sh@10 -- # set +x 00:50:56.747 ************************************ 00:50:56.747 END TEST blockdev_raid5f 00:50:56.747 ************************************ 00:50:56.747 19:44:12 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:50:56.747 19:44:12 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:50:56.747 19:44:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:50:56.747 19:44:12 -- common/autotest_common.sh@10 -- # set +x 00:50:56.747 19:44:12 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:50:56.747 19:44:12 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:50:56.747 19:44:12 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:50:56.747 19:44:12 -- common/autotest_common.sh@10 -- # set +x 00:50:58.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:50:58.730 Waiting for block devices as requested 00:50:58.730 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:50:58.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:50:59.246 Cleaning 00:50:59.246 Removing: /var/run/dpdk/spdk0/config 00:50:59.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:50:59.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:50:59.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:50:59.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:50:59.246 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:50:59.246 Removing: /var/run/dpdk/spdk0/hugepage_info 00:50:59.246 Removing: /dev/shm/spdk_tgt_trace.pid110236 00:50:59.246 Removing: /var/run/dpdk/spdk0 00:50:59.246 Removing: /var/run/dpdk/spdk_pid109921 00:50:59.246 Removing: /var/run/dpdk/spdk_pid110236 00:50:59.246 Removing: /var/run/dpdk/spdk_pid110544 00:50:59.246 Removing: /var/run/dpdk/spdk_pid110686 00:50:59.246 Removing: 
/var/run/dpdk/spdk_pid110750 00:50:59.246 Removing: /var/run/dpdk/spdk_pid110926 00:50:59.246 Removing: /var/run/dpdk/spdk_pid110961 00:50:59.246 Removing: /var/run/dpdk/spdk_pid111149 00:50:59.246 Removing: /var/run/dpdk/spdk_pid111424 00:50:59.246 Removing: /var/run/dpdk/spdk_pid111645 00:50:59.246 Removing: /var/run/dpdk/spdk_pid111767 00:50:59.246 Removing: /var/run/dpdk/spdk_pid111901 00:50:59.246 Removing: /var/run/dpdk/spdk_pid112039 00:50:59.246 Removing: /var/run/dpdk/spdk_pid112194 00:50:59.246 Removing: /var/run/dpdk/spdk_pid112252 00:50:59.246 Removing: /var/run/dpdk/spdk_pid112299 00:50:59.246 Removing: /var/run/dpdk/spdk_pid112385 00:50:59.246 Removing: /var/run/dpdk/spdk_pid112543 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113128 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113233 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113334 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113354 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113561 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113581 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113805 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113832 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113917 00:50:59.246 Removing: /var/run/dpdk/spdk_pid113940 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114043 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114065 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114304 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114358 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114410 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114502 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114629 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114676 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114792 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114869 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114939 00:50:59.246 Removing: /var/run/dpdk/spdk_pid114998 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115065 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115145 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115212 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115274 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115362 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115424 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115482 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115544 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115634 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115697 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115763 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115825 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115903 00:50:59.246 Removing: /var/run/dpdk/spdk_pid115973 00:50:59.246 Removing: /var/run/dpdk/spdk_pid116038 00:50:59.246 Removing: /var/run/dpdk/spdk_pid116123 00:50:59.246 Removing: /var/run/dpdk/spdk_pid116185 00:50:59.246 Removing: /var/run/dpdk/spdk_pid116292 00:50:59.246 Removing: /var/run/dpdk/spdk_pid116462 00:50:59.504 Removing: /var/run/dpdk/spdk_pid116674 00:50:59.504 Removing: /var/run/dpdk/spdk_pid116812 00:50:59.504 Removing: /var/run/dpdk/spdk_pid116887 00:50:59.504 Removing: /var/run/dpdk/spdk_pid118264 00:50:59.504 Removing: /var/run/dpdk/spdk_pid118520 00:50:59.504 Removing: /var/run/dpdk/spdk_pid118785 00:50:59.504 Removing: /var/run/dpdk/spdk_pid118941 00:50:59.504 Removing: /var/run/dpdk/spdk_pid119132 00:50:59.504 Removing: /var/run/dpdk/spdk_pid119250 00:50:59.504 Removing: /var/run/dpdk/spdk_pid119292 00:50:59.504 Removing: /var/run/dpdk/spdk_pid119338 00:50:59.504 Removing: /var/run/dpdk/spdk_pid119886 00:50:59.504 Removing: /var/run/dpdk/spdk_pid119990 00:50:59.504 Removing: 
/var/run/dpdk/spdk_pid120138 00:50:59.504 Removing: /var/run/dpdk/spdk_pid120234 00:50:59.504 Removing: /var/run/dpdk/spdk_pid121616 00:50:59.504 Removing: /var/run/dpdk/spdk_pid122634 00:50:59.504 Removing: /var/run/dpdk/spdk_pid123660 00:50:59.504 Removing: /var/run/dpdk/spdk_pid124942 00:50:59.504 Removing: /var/run/dpdk/spdk_pid126165 00:50:59.504 Removing: /var/run/dpdk/spdk_pid127393 00:50:59.504 Removing: /var/run/dpdk/spdk_pid129105 00:50:59.504 Removing: /var/run/dpdk/spdk_pid130470 00:50:59.504 Removing: /var/run/dpdk/spdk_pid131825 00:50:59.504 Removing: /var/run/dpdk/spdk_pid132576 00:50:59.504 Removing: /var/run/dpdk/spdk_pid133173 00:50:59.504 Removing: /var/run/dpdk/spdk_pid133858 00:50:59.504 Removing: /var/run/dpdk/spdk_pid134362 00:50:59.504 Removing: /var/run/dpdk/spdk_pid134991 00:50:59.504 Removing: /var/run/dpdk/spdk_pid135603 00:50:59.504 Removing: /var/run/dpdk/spdk_pid136357 00:50:59.504 Removing: /var/run/dpdk/spdk_pid136939 00:50:59.504 Removing: /var/run/dpdk/spdk_pid138513 00:50:59.504 Removing: /var/run/dpdk/spdk_pid139185 00:50:59.504 Removing: /var/run/dpdk/spdk_pid139796 00:50:59.504 Removing: /var/run/dpdk/spdk_pid141489 00:50:59.504 Removing: /var/run/dpdk/spdk_pid142242 00:50:59.504 Removing: /var/run/dpdk/spdk_pid142921 00:50:59.504 Removing: /var/run/dpdk/spdk_pid143809 00:50:59.504 Removing: /var/run/dpdk/spdk_pid143876 00:50:59.504 Removing: /var/run/dpdk/spdk_pid143959 00:50:59.504 Removing: /var/run/dpdk/spdk_pid144018 00:50:59.504 Removing: /var/run/dpdk/spdk_pid144194 00:50:59.505 Removing: /var/run/dpdk/spdk_pid144350 00:50:59.505 Removing: /var/run/dpdk/spdk_pid144605 00:50:59.505 Removing: /var/run/dpdk/spdk_pid144924 00:50:59.505 Removing: /var/run/dpdk/spdk_pid144960 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145025 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145056 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145085 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145143 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145171 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145211 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145250 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145299 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145331 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145371 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145402 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145453 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145492 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145531 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145563 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145619 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145650 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145679 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145743 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145780 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145845 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145941 00:50:59.505 Removing: /var/run/dpdk/spdk_pid145998 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146030 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146107 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146138 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146164 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146233 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146289 00:50:59.505 Removing: /var/run/dpdk/spdk_pid146342 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146369 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146398 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146443 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146473 00:50:59.762 Removing: 
/var/run/dpdk/spdk_pid146502 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146530 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146555 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146631 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146689 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146716 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146770 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146802 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146849 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146921 00:50:59.762 Removing: /var/run/dpdk/spdk_pid146948 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147000 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147059 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147084 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147113 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147137 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147184 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147212 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147237 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147349 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147466 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147662 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147701 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147764 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147850 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147888 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147926 00:50:59.762 Removing: /var/run/dpdk/spdk_pid147962 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148022 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148063 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148160 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148236 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148315 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148631 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148780 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148857 00:50:59.762 Removing: /var/run/dpdk/spdk_pid148961 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149060 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149121 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149426 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149553 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149692 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149752 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149794 00:50:59.762 Removing: /var/run/dpdk/spdk_pid149893 00:50:59.763 Removing: /var/run/dpdk/spdk_pid150401 00:50:59.763 Removing: /var/run/dpdk/spdk_pid150474 00:50:59.763 Removing: /var/run/dpdk/spdk_pid150831 00:50:59.763 Removing: /var/run/dpdk/spdk_pid150943 00:50:59.763 Removing: /var/run/dpdk/spdk_pid151083 00:50:59.763 Removing: /var/run/dpdk/spdk_pid151175 00:50:59.763 Removing: /var/run/dpdk/spdk_pid151218 00:50:59.763 Removing: /var/run/dpdk/spdk_pid151254 00:50:59.763 Removing: /var/run/dpdk/spdk_pid152760 00:50:59.763 Removing: /var/run/dpdk/spdk_pid152939 00:50:59.763 Removing: /var/run/dpdk/spdk_pid152943 00:50:59.763 Removing: /var/run/dpdk/spdk_pid152960 00:50:59.763 Removing: /var/run/dpdk/spdk_pid153477 00:50:59.763 Removing: /var/run/dpdk/spdk_pid153623 00:50:59.763 Removing: /var/run/dpdk/spdk_pid153807 00:50:59.763 Removing: /var/run/dpdk/spdk_pid153902 00:50:59.763 Removing: /var/run/dpdk/spdk_pid153963 00:50:59.763 Removing: /var/run/dpdk/spdk_pid154295 00:50:59.763 Removing: /var/run/dpdk/spdk_pid154526 00:50:59.763 Removing: /var/run/dpdk/spdk_pid154638 00:50:59.763 Removing: /var/run/dpdk/spdk_pid154784 00:50:59.763 Removing: /var/run/dpdk/spdk_pid154877 00:50:59.763 Removing: /var/run/dpdk/spdk_pid154919 00:50:59.763 Clean 00:51:00.020 19:44:15 
-- common/autotest_common.sh@1437 -- # return 0 00:51:00.020 19:44:15 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:51:00.020 19:44:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:51:00.020 19:44:15 -- common/autotest_common.sh@10 -- # set +x 00:51:00.020 19:44:15 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:51:00.020 19:44:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:51:00.020 19:44:15 -- common/autotest_common.sh@10 -- # set +x 00:51:00.020 19:44:15 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:51:00.020 19:44:15 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:51:00.020 19:44:15 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:51:00.020 19:44:15 -- spdk/autotest.sh@389 -- # hash lcov 00:51:00.020 19:44:15 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:51:00.020 19:44:15 -- spdk/autotest.sh@391 -- # hostname 00:51:00.020 19:44:15 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:51:00.277 geninfo: WARNING: invalid characters removed from testname! 00:51:56.491 19:45:05 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:56.491 19:45:11 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:59.020 19:45:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:02.296 19:45:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:05.576 19:45:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:08.858 19:45:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 
--rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:12.161 19:45:27 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:52:12.161 19:45:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:12.161 19:45:27 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:52:12.161 19:45:27 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:12.161 19:45:27 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:12.161 19:45:27 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:12.161 19:45:27 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:12.161 19:45:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:12.161 19:45:27 -- paths/export.sh@5 -- $ export PATH 00:52:12.161 19:45:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:12.161 19:45:27 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:52:12.161 19:45:27 -- common/autobuild_common.sh@435 -- $ date +%s 00:52:12.161 19:45:27 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713469527.XXXXXX 00:52:12.161 19:45:27 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713469527.AmIphC 00:52:12.161 19:45:27 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:52:12.161 19:45:27 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:52:12.161 19:45:27 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:52:12.161 19:45:27 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:52:12.161 19:45:27 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:52:12.161 19:45:27 -- common/autobuild_common.sh@451 -- $ get_config_params 00:52:12.161 19:45:27 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:52:12.161 19:45:27 -- common/autotest_common.sh@10 -- $ set +x 00:52:12.161 19:45:27 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:52:12.161 19:45:27 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:52:12.161 19:45:27 -- pm/common@17 -- $ local monitor 00:52:12.161 19:45:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:12.161 19:45:27 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=156554 00:52:12.161 19:45:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:12.161 19:45:27 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=156555 00:52:12.161 19:45:27 -- pm/common@26 -- $ sleep 1 00:52:12.161 19:45:27 -- pm/common@21 -- $ date +%s 00:52:12.161 19:45:27 -- pm/common@21 -- $ date +%s 00:52:12.161 19:45:27 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713469527 00:52:12.161 19:45:27 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713469527 00:52:12.161 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:52:12.161 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:52:12.161 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713469527_collect-vmstat.pm.log 00:52:12.161 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713469527_collect-cpu-load.pm.log 00:52:12.728 19:45:28 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:52:12.728 19:45:28 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:52:12.728 19:45:28 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:52:12.728 19:45:28 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:52:12.728 19:45:28 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:52:12.728 19:45:28 -- spdk/autopackage.sh@19 -- $ timing_finish 00:52:12.728 19:45:28 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:52:12.728 19:45:28 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:52:12.728 19:45:28 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:12.728 19:45:28 -- spdk/autopackage.sh@20 -- $ exit 0 00:52:12.728 19:45:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:52:12.728 19:45:28 -- pm/common@30 -- $ signal_monitor_resources TERM 00:52:12.728 19:45:28 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:52:12.728 19:45:28 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:12.728 19:45:28 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:52:12.728 19:45:28 -- pm/common@45 -- $ pid=156561 00:52:12.728 19:45:28 -- pm/common@52 -- $ sudo kill -TERM 156561 00:52:12.985 19:45:28 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:12.985 19:45:28 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:52:12.985 19:45:28 -- pm/common@45 -- $ pid=156562 00:52:12.985 19:45:28 -- pm/common@52 -- $ sudo kill -TERM 156562 00:52:12.985 + [[ -n 2344 ]] 00:52:12.985 + sudo kill 2344 00:52:12.985 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 
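The coverage post-processing earlier in this stage is one lcov capture, one merge against the pre-test baseline, and a series of path filters. With the common --rc switches elided, the sequence reduces to roughly the following (the -t tag in this run was the host image name):

    OUT=/home/vagrant/spdk_repo/spdk/../output
    # capture counters from the instrumented tree
    lcov --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $OUT/cov_test.info
    # merge with the baseline captured before the tests ran
    lcov --no-external -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
    # drop DPDK, system and utility-app paths from the report
    for p in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov --no-external -q -r $OUT/cov_total.info "$p" -o $OUT/cov_total.info
    done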
00:52:12.996 [Pipeline] } 00:52:13.015 [Pipeline] // timeout 00:52:13.020 [Pipeline] } 00:52:13.039 [Pipeline] // stage 00:52:13.044 [Pipeline] } 00:52:13.062 [Pipeline] // catchError 00:52:13.071 [Pipeline] stage 00:52:13.074 [Pipeline] { (Stop VM) 00:52:13.088 [Pipeline] sh 00:52:13.398 + vagrant halt 00:52:17.643 ==> default: Halting domain... 00:52:27.649 [Pipeline] sh 00:52:27.962 + vagrant destroy -f 00:52:32.151 ==> default: Removing domain... 00:52:32.422 [Pipeline] sh 00:52:32.701 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest_3/output 00:52:32.709 [Pipeline] } 00:52:32.723 [Pipeline] // stage 00:52:32.727 [Pipeline] } 00:52:32.742 [Pipeline] // dir 00:52:32.747 [Pipeline] } 00:52:32.762 [Pipeline] // wrap 00:52:32.767 [Pipeline] } 00:52:32.780 [Pipeline] // catchError 00:52:32.788 [Pipeline] stage 00:52:32.790 [Pipeline] { (Epilogue) 00:52:32.805 [Pipeline] sh 00:52:33.078 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:52:55.075 [Pipeline] catchError 00:52:55.077 [Pipeline] { 00:52:55.095 [Pipeline] sh 00:52:55.379 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:52:55.637 Artifacts sizes are good 00:52:55.646 [Pipeline] } 00:52:55.665 [Pipeline] // catchError 00:52:55.679 [Pipeline] archiveArtifacts 00:52:55.687 Archiving artifacts 00:52:56.088 [Pipeline] cleanWs 00:52:56.099 [WS-CLEANUP] Deleting project workspace... 00:52:56.099 [WS-CLEANUP] Deferred wipeout is used... 00:52:56.104 [WS-CLEANUP] done 00:52:56.106 [Pipeline] } 00:52:56.125 [Pipeline] // stage 00:52:56.130 [Pipeline] } 00:52:56.146 [Pipeline] // node 00:52:56.151 [Pipeline] End of Pipeline 00:52:56.217 Finished: SUCCESS